1. Cirelli LK, Talukder LS, Kragness HE. Infant attention to rhythmic audiovisual synchrony is modulated by stimulus properties. Front Psychol 2024;15:1393295. PMID: 39027053; PMCID: PMC11256966; DOI: 10.3389/fpsyg.2024.1393295.
Abstract
Musical interactions are a common and multimodal part of an infant's daily experiences. Infants hear their parents sing while watching their lips move and see their older siblings dance along to music playing over the radio. Here, we explore whether 8- to 12-month-old infants associate musical rhythms they hear with synchronous visual displays by tracking their dynamic visual attention to matched and mismatched displays. Visual attention was measured using eye-tracking while infants attended to a screen displaying two videos of a finger tapping at different speeds. These videos were presented side by side while infants listened to an auditory rhythm (high or low pitch) synchronized with one of the two videos. Infants attended more to the low-pitch trials than to the high-pitch trials but did not display a preference for attending to the synchronous hand over the asynchronous hand within trials. Exploratory evidence, however, suggests that tempo, pitch, and rhythmic complexity interactively engage infants' visual attention to a tapping hand, especially when that hand is aligned with the auditory stimulus. For example, when the rhythm was complex and the auditory stimulus was low in pitch, infants attended more to the fast hand when it was aligned with the auditory stream than when it was misaligned. These results suggest that audiovisual integration in rhythmic non-speech contexts is influenced by stimulus properties.
Affiliation(s)
- Laura K. Cirelli: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Labeeb S. Talukder: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Haley E. Kragness: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada; Psychology Department, Bucknell University, Lewisburg, PA, United States
2. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024;162:105711. PMID: 38729280; DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review investigates the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW): the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
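As a rough illustration of how a temporal binding window is typically operationalized, the sketch below simulates simultaneity judgments across stimulus-onset asynchronies (SOAs) and reads off the SOA span where "simultaneous" reports dominate. The observer model, SOA grid, trial count, and threshold are all invented for the example; this is not the analysis of any reviewed study.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_simultaneous(soa_ms, width=200.0):
    """Toy observer: probability of a 'simultaneous' report falls off
    as a Gaussian of the audiovisual SOA (assumed half-width = width/2)."""
    return np.exp(-0.5 * (soa_ms / (width / 2)) ** 2)

def estimate_tbw(soas, reports, threshold=0.5):
    """Estimate the TBW as the SOA span over which the observed
    proportion of 'simultaneous' reports exceeds threshold."""
    props = np.array([reports[s].mean() for s in soas])
    inside = soas[props >= threshold]
    return inside.max() - inside.min() if inside.size else 0.0

soas = np.arange(-400, 401, 50)  # ms; audio leading (-) to lagging (+)
reports = {s: rng.random(200) < p_simultaneous(s) for s in soas}
print(f"estimated TBW ~ {estimate_tbw(soas, reports):.0f} ms")
```

With a 200 ms generative width and 50 ms SOA steps, the recovered window lands near 200 ms; real studies fit psychometric functions rather than thresholding raw proportions, but the quantity being estimated is the same.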
Affiliation(s)
- Silvia Ampollini: Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
- Martina Ardizzi: Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni: Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala: Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
3. Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023;378:20220338. PMID: 37545309; PMCID: PMC10404930; DOI: 10.1098/rstb.2022.0338.
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies of sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send this information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where inputs from multiple modalities converge and are integrated to generate a meaningful percept. This integration process is neither simple nor fixed, because these brain areas interact with each other via complicated circuits that can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea; Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
4. The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022;228:105226. PMID: 35882100; DOI: 10.1016/j.cognition.2022.105226.
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children saw either four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of that utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased with age while attention to the desynchronized faces decreased, and that attention to the talkers' mouths primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
5. Development of multisensory integration following prolonged early-onset visual deprivation. Curr Biol 2021;31:4879-4885.e6. PMID: 34534443; DOI: 10.1016/j.cub.2021.08.060.
Abstract
Adult humans make effortless use of multisensory signals and typically integrate them in an optimal fashion [1]. This remarkable ability takes many years for normally sighted children to develop [2,3]. Would individuals born blind or with extremely low vision still be able to develop multisensory integration later in life when surgically treated for sight restoration? Late acquisition of such capability would be a vivid example of the brain's ability to retain high levels of plasticity. We studied the development of multisensory integration in individuals suffering from congenital dense bilateral cataract, surgically treated years after birth. We assessed cataract-treated individuals' reliance on their restored visual abilities when estimating the size of an object simultaneously explored by touch. Within weeks to months after surgery, when combining information from vision and touch, they developed a multisensory weighting behavior similar to matched typically sighted controls. Next, we tested whether cataract-treated individuals benefited from integrating vision with touch by increasing the precision of size estimates, as occurs when signals are integrated in a statistically optimal fashion [1]. For participants retested multiple times, such a benefit developed within months after surgery to levels of precision indistinguishable from optimal behavior. To summarize, the development of multisensory integration does not merely depend on age, but requires extensive multisensory experience with the world, rendered possible by the improved post-surgical visual acuity. We conclude that early exposure to multisensory signals is not essential for the development of multisensory integration, which can still be acquired even after many years of visual deprivation.
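The "statistically optimal" integration this abstract invokes is usually modeled as maximum-likelihood cue combination: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch with made-up visual and haptic size estimates (the numbers are illustrative, not the study's data):

```python
def ml_combine(mu_v, var_v, mu_t, var_t):
    """Maximum-likelihood (inverse-variance-weighted) fusion of a visual
    and a haptic estimate of the same quantity. The fused variance is
    always smaller than the variance of the better single cue."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_t)   # weight on vision
    mu = w_v * mu_v + (1 - w_v) * mu_t            # fused estimate
    var = (var_v * var_t) / (var_v + var_t)       # fused variance
    return mu, var

# Example: vision twice as reliable as touch (half the variance),
# so the fused estimate sits closer to the visual cue.
mu, var = ml_combine(mu_v=10.0, var_v=1.0, mu_t=12.0, var_t=2.0)
print(mu, var)
```

Testing whether post-surgical participants' size-estimation variance approaches this fused-variance prediction is, in essence, the optimality benchmark the study describes.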
6. Lewkowicz DJ, Schmuckler M, Agrawal V. The multisensory cocktail party problem in adults: Perceptual segregation of talking faces on the basis of audiovisual temporal synchrony. Cognition 2021;214:104743. PMID: 33940250; DOI: 10.1016/j.cognition.2021.104743.
Abstract
Social interactions often involve a cluttered multisensory scene consisting of multiple talking faces. We investigated whether audiovisual temporal synchrony can facilitate perceptual segregation of talking faces. Participants saw either four identical or four different talking faces producing temporally jittered versions of the same visible speech utterance and heard the audible version of that utterance. The audible utterance was either synchronized with the visible utterance produced by one of the talking faces or not synchronized with any of them. Eye tracking indicated that participants exhibited a marked preference for the synchronized talking face, that they gazed more at the mouth than the eyes overall, that they gazed more at the eyes of an audiovisually synchronized than a desynchronized talking face, and that they gazed more at the mouth when all talking faces were audiovisually desynchronized. These findings demonstrate that audiovisual temporal synchrony plays a major role in perceptual segregation of multisensory clutter and that adults rely on differential scanning strategies of a talker's eyes and mouth to discover sources of multisensory coherence.
Affiliation(s)
- David J Lewkowicz: Haskins Laboratories, New Haven, CT, USA; Yale Child Study Center, New Haven, CT, USA
- Mark Schmuckler: Department of Psychology, University of Toronto at Scarborough, Toronto, Canada
7. Xu X, Hanganu-Opatz IL, Bieler M. Cross-Talk of Low-Level Sensory and High-Level Cognitive Processing: Development, Mechanisms, and Relevance for Cross-Modal Abilities of the Brain. Front Neurorobot 2020;14:7. PMID: 32116637; PMCID: PMC7034303; DOI: 10.3389/fnbot.2020.00007.
Abstract
The emergence of cross-modal learning capabilities requires the interaction of neural areas accounting for sensory and cognitive processing. Convergence of multiple sensory inputs is observed in low-level sensory cortices including primary somatosensory (S1), visual (V1), and auditory cortex (A1), as well as in high-level areas such as prefrontal cortex (PFC). Evidence shows that local neural activity and functional connectivity between sensory cortices participate in cross-modal processing. However, little is known about the functional interplay between neural areas underlying sensory and cognitive processing required for cross-modal learning capabilities across the lifespan. Here we review our current knowledge of the interdependence of low- and high-level cortices for the emergence of cross-modal processing in rodents. First, we summarize the mechanisms underlying the integration of multiple senses and how cross-modal processing in primary sensory cortices might be modified by top-down modulation of the PFC. Second, we examine the critical factors and developmental mechanisms that account for the interaction between neuronal networks involved in sensory and cognitive processing. Finally, we discuss the applicability and relevance of cross-modal processing for brain-inspired intelligent robotics. An in-depth understanding of the factors and mechanisms controlling cross-modal processing might inspire the refinement of robotic systems by better mimicking neural computations.
Affiliation(s)
- Xiaxia Xu: Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ileana L Hanganu-Opatz: Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Malte Bieler: Laboratory for Neural Computation, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
8. Zhao S, Wang Y, Feng C, Feng W. Multiple phases of cross-sensory interactions associated with the audiovisual bounce-inducing effect. Biol Psychol 2019;149:107805. PMID: 31689465; DOI: 10.1016/j.biopsycho.2019.107805.
Abstract
Using event-related potential (ERP) recordings, the present study investigated the cross-modal neural activities underlying the audiovisual bounce-inducing effect (ABE) via a novel experimental design wherein the audiovisual bouncing trials were induced solely by the ABE. The within-subject (percept-based) analysis showed that early cross-modal interactions within 100-200 ms after sound onset over fronto-central and occipital regions were associated with the occurrence of the ABE, but the cross-modal interaction at a later latency (ND250, 220-280 ms) over fronto-central region did not differ between ABE trials and non-ABE trials. The between-subject analysis indicated that the cross-modal interaction revealed by ND250 was larger for subjects who perceived the ABE more frequently. These findings suggest that the ABE is generated as a consequence of the rapid interplay between the variations of early cross-modal interactions and the general multisensory binding predisposition at an individual level.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yajie Wang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
9. Barutchu A, Toohey S, Shivdasani MN, Fifer JM, Crewther SG, Grayden DB, Paolini AG. Multisensory perception and attention in school-age children. J Exp Child Psychol 2019;180:141-155. DOI: 10.1016/j.jecp.2018.11.021.
10. Rohlf S, Habets B, von Frieling M, Röder B. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults. eLife 2017;6:e28166. PMID: 28949291; PMCID: PMC5662286; DOI: 10.7554/elife.28166.
Abstract
During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly, while adults learn them only when they are task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood.
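The stimulus probabilities in the design above can be written down directly; the sketch below (the sampling helper is illustrative, not the authors' procedure) checks that the five conditions are exhaustive and draws a trial sequence matching the stated proportions:

```python
import random

# Stimulus probabilities from the passive-exposure design described above:
# two frequent standards, two rare recombinations, one rare deviant.
DESIGN = {
    "A1V1": 0.35, "A2V2": 0.35,   # standard audio-visual combinations
    "A1V2": 0.10, "A2V1": 0.10,   # recombined standard elements
    "A3V3": 0.10,                 # deviant with novel elements
}
assert abs(sum(DESIGN.values()) - 1.0) < 1e-9  # probabilities sum to 1

def sample_trials(n, seed=0):
    """Draw a trial sequence whose condition frequencies match DESIGN."""
    rng = random.Random(seed)
    return rng.choices(list(DESIGN), weights=list(DESIGN.values()), k=n)

print(sample_trials(10))
```

The key contrast in the study is then between responses to the recombined stimuli (familiar elements, novel pairing) and the standards (familiar elements, familiar pairing), which isolates sensitivity to the crossmodal statistics rather than to element novelty.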
Affiliation(s)
- Sophie Rohlf: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Boukje Habets: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Biological Psychology and Cognitive Neuroscience, University of Bielefeld, Bielefeld, Germany
- Marco von Frieling: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
11. Minar NJ, Lewkowicz DJ. Overcoming the other-race effect in infancy with multisensory redundancy: 10-12-month-olds discriminate dynamic other-race faces producing speech. Dev Sci 2017;21:e12604. PMID: 28944541; DOI: 10.1111/desc.12604.
Abstract
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces (the other-race effect; ORE) may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of dynamic own-race or other-race faces accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-old infants discriminated own-race and other-race faces accompanied by a non-speech sound whereas 10-12-month-old infants discriminated only own-race faces accompanied by a non-speech sound. Overall, the results suggest that the ORE reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.
Affiliation(s)
- Nicholas J Minar: Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- David J Lewkowicz: Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
12. Zhao S, Wang Y, Jia L, Feng C, Liao Y, Feng W. Pre-coincidence brain activity predicts the perceptual outcome of streaming/bouncing motion display. Sci Rep 2017;7:8832. PMID: 28821774; PMCID: PMC5562831; DOI: 10.1038/s41598-017-08801-5.
Abstract
When two identical visual discs move toward each other on a two-dimensional display, they can be perceived as either "streaming through" or "bouncing off" each other after their coincidence. Previous studies have observed a strong bias toward the streaming percept. Additionally, the incidence of the bouncing percept in this ambiguous display can be increased by various factors, such as a brief sound at the moment of coincidence or a momentary pause of the two discs. The streaming/bouncing bistable motion phenomenon has been studied intensively since its discovery. However, little is known about the neural basis underlying the perceptual ambiguity in the classic version of the streaming/bouncing motion display. The present study investigated the neural basis of perceptual disambiguation during the processing of the streaming/bouncing bistable motion display using event-related potential (ERP) recordings. Surprisingly, the amplitude of the fronto-central P2 (220-260 ms) elicited by the moving discs ~200 ms before the coincidence of the two discs was predictive of the subsequent streaming or bouncing percept: a larger P2 amplitude was observed for the streaming percept than for the bouncing percept. These findings suggest that the streaming/bouncing bistable perception may be disambiguated unconsciously ~200 ms before the coincidence of the two discs.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yajie Wang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Lina Jia: Department of Education, School of Humanities, Jiangnan University, Wuxi 214122, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yu Liao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
13. Thomas RL, Nardini M, Mareschal D. The impact of semantically congruent and incongruent visual information on auditory object recognition across development. J Exp Child Psychol 2017;162:72-88. PMID: 28595113; DOI: 10.1016/j.jecp.2017.04.020.
Abstract
The ability to use different sensory signals in conjunction confers numerous advantages on perception. Multisensory perception in adults is influenced by factors beyond low-level stimulus properties, such as semantic congruency. Sensitivity to semantic relations has been shown to emerge early in development; however, less is known about whether implementation of these associations changes with development or whether development in the representations themselves might modulate their influence. Here, we used a Stroop-like paradigm that requires participants to identify an auditory stimulus while ignoring a visual stimulus. Prior research shows that in adults visual distractors have more impact on the processing of auditory objects than vice versa; however, this pattern appears to be inverted early in development. We found that children from 8 years of age (and adults) gain a speed advantage from semantically congruent visual information and are disadvantaged by semantically incongruent visual information. At 6 years of age, children gain a speed advantage from semantically congruent visual information but are not disadvantaged by semantically incongruent visual information (as compared with semantically unrelated visual information). Both children and adults were influenced by associations between auditory and visual stimuli to which they had been exposed on only 12 occasions during the learning phase of the study. Adults showed a significant speed advantage over children for well-established associations but no such advantage for newly acquired pairings. This suggests that the influence of semantic associations on multisensory processing does not change with age; rather, these associations become more robust and, in turn, more influential.
Affiliation(s)
- Rhiannon L Thomas: Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths College, University of London, London SE14 6NW, UK; Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
- Marko Nardini: Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Denis Mareschal: Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
14. de Boisferon AH, Tift AH, Minar NJ, Lewkowicz DJ. Selective attention to a talker's mouth in infancy: role of audiovisual temporal synchrony and linguistic experience. Dev Sci 2017;20. PMID: 26743437; PMCID: PMC6340138; DOI: 10.1111/desc.12381.
Abstract
Previous studies have found that infants shift their attention from the eyes to the mouth of a talker when they enter the canonical babbling phase after 6 months of age. Here, we investigated whether this increased attentional focus on the mouth is mediated by audiovisual synchrony and linguistic experience. To do so, we tracked eye gaze in 4-, 6-, 8-, 10-, and 12-month-old infants while they were exposed to desynchronized audiovisual fluent speech in either their native or a non-native language. Results indicated that, regardless of language, desynchronization disrupted the usual pattern of relative attention to the eyes and mouth found in response to synchronized speech at 10 months but not at any other age. These findings show that audiovisual synchrony mediates selective attention to a talker's mouth just prior to the emergence of initial language expertise and that it declines in importance once infants become native-language experts.
15. Murray MM, Lewkowicz DJ, Amedi A, Wallace MT. Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 2016;39:567-579. PMID: 27282408; PMCID: PMC4967384; DOI: 10.1016/j.tins.2016.05.003.
Abstract
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, University Hospital Centre and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
| | - David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
| | - Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem, Israel; Interdisciplinary and Cognitive Science Program, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem, Israel
| | - Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
|
16
|
Goswami U. Educational neuroscience: neural structure-mapping and the promise of oscillations. Curr Opin Behav Sci 2016. [DOI: 10.1016/j.cobeha.2016.05.011]
|
17
|
Chen YC, Shore DI, Lewis TL, Maurer D. The development of the perception of audiovisual simultaneity. J Exp Child Psychol 2016; 146:17-33. [DOI: 10.1016/j.jecp.2016.01.010]
|
18
|
Dionne-Dostie E, Paquette N, Lassonde M, Gallagher A. Multisensory integration and child neurodevelopment. Brain Sci 2015; 5:32-57. [PMID: 25679116 PMCID: PMC4390790 DOI: 10.3390/brainsci5010032]
Abstract
A considerable number of cognitive processes depend on the integration of multisensory information. The brain integrates this information, providing a complete representation of our surrounding world and giving us the ability to react optimally to the environment. Infancy is a period of great changes in brain structure and function that are reflected in the increasing processing capacities of the developing child. However, it is unclear whether the optimal use of multisensory information is present early in childhood or develops only later, with experience. The first part of this review focuses on the typical development of multisensory integration (MSI): we introduce MSI and its neuroanatomical correlates and describe two hypotheses about the developmental course of MSI in neurotypical infants and children. The second section discusses the neurodevelopmental trajectory of MSI in cognitively challenged infants and children. A few studies have brought to light various difficulties in integrating sensory information in children with a neurodevelopmental disorder. Accordingly, we outline possible neurophysiological relationships between MSI deficits and neurodevelopmental disorders, especially dyslexia and attention deficit disorder with/without hyperactivity.
Affiliation(s)
- Emmanuelle Dionne-Dostie
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Departement of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Natacha Paquette
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Departement of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Maryse Lassonde
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Departement of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Anne Gallagher
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Departement of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
|
19
|
Abstract
Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked whether the voice and face went together (Experiment 1) or whether the desynchronized videos differed from the synchronized one (Experiment 2). Four-year-olds detected the 666-ms asynchrony, 5-year-olds detected the 666- and 500-ms asynchrony, and 6-year-olds detected all asynchronies. These results show that the A-V temporal binding window narrows slowly during early childhood and that it is still wider at 6 years of age than in older children and adults.
|
20
|
Lewkowicz DJ. Early experience and multisensory perceptual narrowing. Dev Psychobiol 2014; 56:292-315. [PMID: 24435505 PMCID: PMC3953347 DOI: 10.1002/dev.21197]
Abstract
Perceptual narrowing reflects the effects of early experience and contributes in key ways to perceptual and cognitive development. Previous studies have found that unisensory perceptual sensitivity in young infants is broadly tuned such that they can discriminate native as well as non-native sensory inputs but that it is more narrowly tuned in older infants such that they only respond to native inputs. Recently, my coworkers and I discovered that multisensory perceptual sensitivity narrows as well. The present article reviews this new evidence in the general context of multisensory perceptual development and the effects of early experience. Together, the evidence on unisensory and multisensory narrowing shows that early experience shapes the emergence of perceptual specialization and expertise.
Affiliation(s)
- David J Lewkowicz
- Department of Psychology & Center for Complex Systems & Brain Sciences, Florida Atlantic University, 777 Glades Rd, Boca Raton, FL, 33431.
|
21
|
Martin JR. Experiences of activity and causality in schizophrenia: when predictive deficits lead to a retrospective over-binding. Conscious Cogn 2013; 22:1361-74. [PMID: 24095708 DOI: 10.1016/j.concog.2013.09.003]
Abstract
In this paper I discuss an intriguing and relatively little-studied symptomatic expression of schizophrenia known as experiences of activity, in which patients form the delusion that they can control some external events by the sole means of their mind. I argue that experiences of activity result from patients being prone to aberrantly infer causal relations between unrelated events in a retrospective way, owing to widespread predictive deficits. Moreover, I suggest that such deficits may, in addition, lead to an aberrant intentional binding effect, i.e., the subjective compression of the temporal interval between an intentional action and its external effects (Haggard, Clark, & Kalogeras, 2002). In particular, it might be that patients' thoughts are bound to the external events they aimed to control, producing, arguably, a temporal contiguity between these two components. Such temporal contiguity would reinforce or sustain the (causal) feeling that the patient's mind is directly causally efficacious.
Affiliation(s)
- Jean-Rémy Martin
- Université Paris VI (UPMC), Institut d'Étude de la Cognition and Institut Jean-Nicod (ENS-EHESS-CNRS), Paris, France.
|
22
|
Lewkowicz DJ. Development of ordinal sequence perception in infancy. Dev Sci 2013; 16:352-64. [PMID: 23587035 PMCID: PMC3954567 DOI: 10.1111/desc.12029]
Abstract
Perception of the ordinal position of a sequence element is critical to many cognitive and motor functions. Here, the prediction that this ability is based on a domain-general perceptual mechanism and, thus, that it emerges prior to the emergence of language was tested. Infants were habituated with sequences of moving/sounding objects and then tested for the ability to perceive the invariant ordinal position of a single element (Experiment 1) or the invariant relative ordinal position of two adjacent elements (Experiment 2). Experiment 1 tested 4- and 6-month-old infants and showed that 4-month-old infants focused on conflicting low-level sequence statistics and, therefore, failed to detect the ordinal position information, but that 6-month-old infants ignored the statistics and detected the ordinal position information. Experiment 2 tested 6-, 8-, and 10-month-old infants and showed that only 10-month-old infants detected relative ordinal position information and that they could only accomplish this with the aid of concurrent statistical cues. Together, these results indicate that a domain-general ability to detect ordinal position information emerges during infancy and that its initial emergence is preceded and facilitated by the earlier emergence of the ability to detect statistical cues.
Affiliation(s)
- David J Lewkowicz
- Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA.
|
23
|
Lewkowicz DJ, Pons F. Recognition of amodal language identity emerges in infancy. Int J Behav Dev 2013; 37:90-94. [PMID: 24648601 PMCID: PMC3956126 DOI: 10.1177/0165025412467582]
Abstract
Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic audio-visual (A-V) relations but no studies have investigated whether and when infants begin to perceive higher order A-V relations inherent in speech. Here, we asked whether and when infants become capable of recognizing amodal language identity, a critical perceptual skill that is necessary for the development of multisensory communication. Because, at a minimum, such a skill requires the ability to perceive suprasegmental auditory and visual linguistic information, we predicted that this skill would not emerge before higher-level speech processing and multisensory integration skills emerge. Consistent with this prediction, we found that recognition of the amodal identity of language emerges at 10-12 months of age but that when it emerges it is restricted to infants' native language.
Affiliation(s)
- David J Lewkowicz
- Department of Psychology & Center for Complex Systems & Brain Sciences, Florida Atlantic University, Boca Raton, FL 33431, USA
- Ferran Pons
- Institute for Brain, Cognition and Behaviour (IR3C) & Departament de Psicologia Bàsica, Facultat de Psicologia, Universitat de Barcelona, Pg. Vall d'Hebrón 171, 08035 Barcelona, Spain
|
24
|
Watanabe H, Homae F, Nakano T, Tsuzuki D, Enkhtur L, Nemoto K, Dan I, Taga G. Effect of auditory input on activations in infant diverse cortical regions during audiovisual processing. Hum Brain Mapp 2011; 34:543-65. [PMID: 22102331 DOI: 10.1002/hbm.21453]
Abstract
A fundamental question with regard to perceptual development is how multisensory information is processed in the brain during the early stages of development. Although a growing body of evidence has shown the early emergence of modality-specific functional differentiation of the cortical regions, the interplay between sensory inputs from different modalities in the developing brain is not well understood. To study the effects of auditory input during audio-visual processing in 3-month-old infants, we evaluated the spatiotemporal cortical hemodynamic responses of 50 infants while they perceived visual objects with or without accompanying sounds. The responses were measured using 94-channel near-infrared spectroscopy over the occipital, temporal, and frontal cortices. The effects of sound manipulation were pervasive throughout the diverse cortical regions and were specific to each cortical region. Visual stimuli co-occurring with sound induced the early-onset activation of the early auditory region, followed by activation of the other regions. Removal of the sound stimulus resulted in focal deactivation in the auditory regions and reduced activation in the early visual region, the association region of the temporal and parietal cortices, and the anterior prefrontal regions, suggesting multisensory interplay. In contrast, equivalent activations were observed in the lateral occipital and lateral prefrontal regions, regardless of sound manipulation. Our findings indicate that auditory input did not generally enhance overall activation in relation to visual perception, but rather induced specific changes in each cortical region. The present study implies that 3-month-old infants may perceive audio-visual multisensory inputs by using the global network of functionally differentiated cortical regions.
Affiliation(s)
- Hama Watanabe
- Graduate School of Education, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan.
|
25
|
Tschacher W, Bergomi C. Cognitive binding in schizophrenia: weakened integration of temporal intersensory information. Schizophr Bull 2011; 37 Suppl 2:S13-22. [PMID: 21860043 PMCID: PMC3160115 DOI: 10.1093/schbul/sbr074]
Abstract
Cognitive functioning is based on binding processes, by which different features and elements of neurocognition are integrated and coordinated. Binding is an essential ingredient of, for instance, Gestalt perception. We have implemented a paradigm of causality perception based on the work of Albert Michotte, in which 2 identical discs move from opposite sides of a monitor, steadily toward, and then past one another. Their coincidence generates an ambiguous percept of either "streaming" or "bouncing," which the subjects (34 schizophrenia spectrum patients and 34 controls with mean age 27.9 y) were instructed to report. The latter perception is a marker of the binding processes underlying perceived causality (type I binding). In addition to this visual task, acoustic stimuli were presented at different times during the task (150 ms before and after visual coincidence), which can modulate perceived causality. This modulation by intersensory and temporally delayed stimuli is viewed as a different type of binding (type II). We show here, using a mixed-effects hierarchical analysis, that type II binding distinguishes schizophrenia spectrum patients from healthy controls, whereas type I binding does not. Type I binding may even be excessive in some patients, especially those with positive symptoms; Type II binding, however, was generally attenuated in patients. The present findings point to ways in which the disconnection (or Gestalt) hypothesis of schizophrenia can be refined, suggesting more specific markers of neurocognitive functioning and potential targets of treatment.
Affiliation(s)
- Wolfgang Tschacher
- University Hospital of Psychiatry, University of Bern, Laupenstrasse 49, 3010 Bern, Switzerland.
|
26
|
|
27
|
Lewkowicz D. Development of multisensory temporal perception. Front Neurosci 2011. [DOI: 10.1201/9781439812174-22]
|
28
|
Innes-Brown H, Barutchu A, Shivdasani MN, Crewther DP, Grayden DB, Paolini AG. Susceptibility to the flash-beep illusion is increased in children compared to adults. Dev Sci 2011; 14:1089-99. [DOI: 10.1111/j.1467-7687.2011.01059.x]
|
29
|
Bremner JG, Slater AM, Johnson SP, Mason UC, Spring J, Bremner ME. Two- to eight-month-old infants' perception of dynamic auditory-visual spatial colocation. Child Dev 2011; 82:1210-23. [PMID: 21545580 DOI: 10.1111/j.1467-8624.2011.01593.x]
Abstract
From birth, infants detect associations between the locations of static visual objects and sounds they emit, but there is limited evidence regarding their sensitivity to the dynamic equivalent when a sound-emitting object moves. In 4 experiments involving thirty-six 2-month-olds, forty-eight 5-month-olds, and forty-eight 8-month-olds, we investigated infants' ability to process this form of spatial colocation. Whereas there was no evidence of spontaneous sensitivity, all age groups detected a dynamic colocation during habituation and looked longer at test trials in which sound and sight were dislocated. Only 2-month-olds showed clear sensitivity to the dislocation relation, although 8-month-olds did so following additional habituation. These results are discussed relative to the intersensory redundancy hypothesis and work suggesting increasing specificity in processing with age.
Affiliation(s)
- J Gavin Bremner
- Psychology Department, Centre for Research in Human Development, Lancaster University, UK.
|
30
|
Hyde DC, Jones BL, Flom R, Porter CL. Neural signatures of face-voice synchrony in 5-month-old human infants. Dev Psychobiol 2011; 53:359-70. [DOI: 10.1002/dev.20525]
|
31
|
Grove PM, Sakurai K. Auditory induced bounce perception persists as the probability of a motion reversal is reduced. Perception 2010; 38:951-65. [PMID: 19764299 DOI: 10.1068/p5860]
Abstract
When two identical targets move toward one another from opposite sides of a display and continue past one another along collinear trajectories, they can be perceived to either stream past or bounce off of one another. Streaming is the dominant perception in motion displays free of additional transients, while bouncing predominates when a transient (eg auditory or visual) is presented at the point of coincidence. We investigated whether the auditory induced bias towards bouncing would persist as the probability of a motion reversal was reduced by introducing a spatial offset either vertically in a 2-D display or in depth in a 3-D display. Offset conditions were combined with two auditory conditions (tone or no-tone at the point of coincidence) in the presence or absence of a central occluder. In conditions with no sound, streaming was reported on a clear majority of trials, regardless of spatial offset. When a transient tone was presented, reported motion reversals dominated and persisted for increasing vertical offsets up to 17.9 min of arc and for 3-D trajectory offsets up to 25.6 min of arc. The bounce-promoting effect of an auditory tone at the point of coincidence in stream/bounce displays persists in spite of rendering the visual motion sequence unambiguous and more consistent with streaming.
Affiliation(s)
- Philip M Grove
- School of Psychology, The University of Queensland, St Lucia, Brisbane, QLD 4072, Australia.
|
32
|
Lewkowicz DJ, Leo I, Simion F. Intersensory perception at birth: newborns match nonhuman primate faces and voices. Infancy 2010; 15:46-60. [DOI: 10.1111/j.1532-7078.2009.00005.x]
|
33
|
Lewkowicz DJ. Perception of dynamic and static audiovisual sequences in 3- and 4-month-old infants. Child Dev 2008; 79:1538-54. [DOI: 10.1111/j.1467-8624.2008.01204.x]
|
34
|
Wada Y, Shirai N, Otsuka Y, Midorikawa A, Kanazawa S, Dan I, Yamaguchi MK. Sound enhances detection of visual target during infancy: a study using illusory contours. J Exp Child Psychol 2008; 102:315-22. [PMID: 18755476 DOI: 10.1016/j.jecp.2008.07.002]
Abstract
In adults, a salient tone embedded in a sequence of nonsalient tones improves detection of a synchronously and briefly presented visual target in a rapid, visually distracting sequence. This phenomenon indicates that perception from one sensory modality can be influenced by another one even when the latter modality provides no information about the judged property itself. However, no study has revealed the age-related development of this kind of cross-modal enhancement. Here we tested the effect of concurrent and unique sounds on detection of illusory contours during infancy. We used a preferential looking technique to investigate whether audio-visual enhancement of the detection of illusory contours could be observed at 5, 6, and 7 months of age. A significant enhancement, induced by sound, of the preference for illusory contours was observed only in the 7-month-olds. These results suggest that audio-visual enhancement in visual target detection emerges at 7 months of age.
Affiliation(s)
- Yuji Wada
- Sensory and Cognitive Food Science Laboratory, National Food Research Institute, Tsukuba, Ibaraki, Japan.
|
35
|
Tremblay C, Champoux F, Voss P, Bacon BA, Lepore F, Théoret H. Speech and non-speech audio-visual illusions: a developmental study. PLoS One 2007; 2:e742. [PMID: 17710142 PMCID: PMC1937019 DOI: 10.1371/journal.pone.0000742]
Abstract
It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie intersensory speech as opposed to non-speech perception. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech) and on the Illusory Flash and Fusion effects (two audio-visual illusions not involving speech) to investigate the development of audio-visual interactions and to contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on the non-speech illusory tasks appeared to be homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.
Affiliation(s)
- Corinne Tremblay
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
- François Champoux
- Speech Language Pathology and Audiology, University of Montreal, Montreal, Canada
- Patrice Voss
- Department of Psychology, University of Montreal, Montreal, Canada
- Benoit A. Bacon
- Department of Psychology, Bishop's University, Sherbrooke, Quebec, Canada
- Franco Lepore
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
- Hugo Théoret
- Department of Psychology, University of Montreal, Montreal, Canada
- Research Center, Sainte-Justine Hospital, Montreal, Canada
|
36
|
Zhou F, Wong V, Sekuler R. Multi-sensory integration of spatio-temporal segmentation cues: one plus one does not always equal two. Exp Brain Res 2007; 180:641-54. [PMID: 17333010 DOI: 10.1007/s00221-007-0897-0]
Abstract
How are multiple, multi-sensory stimuli combined for use in segmenting spatio-temporal events? For an answer, we measured the effect of various auditory or visual stimuli, in isolation or in combination, on a bistable percept of visual motion ("bouncing" vs. "streaming"). To minimize individual differences, the physical properties of stimuli were adjusted to reflect individual subjects' sensitivity to each cue in isolation. When put into combination, perceptual influences that had been equipotent in isolation were substantially altered. Specifically, auditory cues that had been strong when presented alone were greatly reduced in combination. Evaluation of alternative models of sensory integration showed that the state of the visual bistable percept could not be accounted for by probability summation among cues, as might occur at the level of decision processes. Instead, the state of the bistable percept was well predicted from a weighted sum of cues, with visual cues strongly dominating auditory cues. Finally, when cue weights were compared for individual subjects, it was found that subjects differ somewhat in the strategy they use for integrating multi-sensory information.
Affiliation(s)
- Feng Zhou
- Brandeis University, Mailstop 013, Waltham, MA 02454, USA
|
37
|
Kawabe T, Miura K. Effects of the orientation of moving objects on the perception of streaming/bouncing motion displays. Percept Psychophys 2006; 68:750-8. [PMID: 17076343 DOI: 10.3758/bf03193698]
Abstract
In this study, we examined the contribution of the orientation of moving objects to perception of a streaming/bouncing motion display. In three experiments, participants reported which of the two types of motion, streaming or bouncing, they perceived. The following independent variables were used: orientation differences between Gabor micropatterns (Gabors) and their path of motion (all the experiments) and the presence/absence of a transient tone (Experiment 1), transient visual flash (Experiment 2), or concurrent secondary task (Experiment 3) at the coincidence of Gabors. The results showed that the events at coincidence generally biased responses toward the perception of bouncing. On the other hand, alignment of Gabors with their motion axes significantly reduced the frequency of bounce perception. The results also indicated that an object whose orientation was parallel to its motion path strengthened the spatiotemporal integration of local motion signals along a straight motion path, resulting in the perception of streaming. We suggest that the effect of collinearity between Gabors and their motion path is relatively free from the effect of attention distraction.
Affiliation(s)
- Takahiro Kawabe
- Department of Psychology, Faculty of Letters, Kyushu University, 6-19-1, Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan.
|
38
|
Neil PA, Chee-Ruiter C, Scheier C, Lewkowicz DJ, Shimojo S. Development of multisensory spatial integration and perception in humans. Dev Sci 2006; 9:454-64. [PMID: 16911447 DOI: 10.1111/j.1467-7687.2006.00512.x]
Abstract
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal response. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25 degrees eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
Affiliation(s)
- Patricia A Neil
- Computation and Neural Systems Department, California Institute of Technology, USA.
|
39
|
Lewkowicz DJ, Marcovitch S. Perception of audiovisual rhythm and its invariance in 4- to 10-month-old infants. Dev Psychobiol 2006; 48:288-300. [PMID: 16617468 DOI: 10.1002/dev.20140]
Abstract
This study investigated the perception of complex audiovisual rhythmic patterns in 4-, 6-, 8-, and 10-month-old human infants. In Experiment 1, we first habituated infants to an event in which an object could be seen and heard bouncing in a rhythmic fashion. We then tested them to determine if they would detect a relative temporal pattern change produced by rearranging the intrapattern intervals. Regardless of age, infants successfully detected the pattern change. In Experiment 2, we asked whether infants also can extract rhythmic pattern invariance amid tempo variations. Thus, we first habituated infants to a particular rhythmic pattern but this time varying in its tempo of presentation across trials. We then administered one test trial in which a novel rhythm was presented at a familiar tempo and another test trial in which a familiar rhythm was presented at a novel tempo. Infants detected both types of changes indicating that they perceived the invariant rhythm and that they did so despite the fact that they also detected the varying tempo. Overall, the findings demonstrate that infants between 4 and 10 months of age can perceive and discriminate complex audiovisual temporal patterns on the basis of relative temporal differences and that they also can learn the invariant nature of such patterns.
Affiliation(s)
- David J Lewkowicz
- Florida Atlantic University, 777 Glades Rd., Boca Raton, Florida 33431, USA.
40
Mitroff SR, Scholl BJ, Wynn K. The relationship between object files and conscious perception. Cognition 2005; 96:67-92. [PMID: 15833307 DOI: 10.1016/j.cognition.2004.03.008] [Citation(s) in RCA: 55] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2003] [Accepted: 03/29/2004] [Indexed: 11/18/2022]
Abstract
Object files (OFs) are hypothesized mid-level representations which mediate our conscious perception of persisting objects, e.g. telling us 'which went where'. Despite the appeal of the OF framework, no previous research has directly explored whether OFs do indeed correspond to conscious percepts. Here we present at least one case wherein conscious percepts of 'which went where' in dynamic ambiguous displays diverge from the analogous correspondence computed by the OF system. Observers viewed a 'bouncing/streaming' display in which two identical objects moved such that they could have either bounced off or streamed past each other. We measured two dependent variables: (1) an explicit report of perceived bouncing or streaming; and (2) an implicit 'object-specific preview benefit' (OSPB), wherein a 'preview' of information on a specific object speeds the recognition of that information at a later point when it appears again on the same object (compared to when it reappears on a different object), beyond display-wide priming. When the displays were manipulated such that observers had a strong bias to perceive streaming (on over 95% of the trials), there was nevertheless a strong OSPB in the opposite direction, such that the object files appeared to have 'bounced' even though the percept 'streamed'. Given that OSPBs have been taken as a hallmark of the operation of object files, the five experiments reported here suggest that in at least some specialized (and perhaps ecologically invalid) cases, conscious percepts of 'which went where' in dynamic ambiguous displays can diverge from the mapping computed by the object-file system.
Affiliation(s)
- Stephen R Mitroff
- Department of Psychology, Yale University, Box 208205, New Haven, CT 06520-8205, USA.
41
Abstract
Serial order is fundamental to perception, cognition and behavioral action. Three experiments investigated infants' perception, learning and discrimination of serial order. Four- and 8-month-old infants were habituated to three sequentially moving objects making visible and audible impacts and then were tested on separate test trials for their ability to detect auditory, visual or auditory-visual changes in their ordering. The 4-month-old infants did not respond to any order changes and instead appeared to attend to the 'local' audio-visual synchrony part of the event. When this local part of the event was blocked from view, the 4-month-olds did perceive the serial order feature of the event but only when it was specified multimodally. In contrast, the 8-month-old infants perceived all three kinds of order changes regardless of whether the synchrony part of the event was visible or not. The findings show that perception of spatiotemporal serial order emerges early in infancy and that its perception is initially facilitated by multimodal specification.
Affiliation(s)
- David J Lewkowicz
- Department of Psychology, Florida Atlantic University, Davie 33314, USA.
42
Richardson DC, Kirkham NZ. Multimodal Events and Moving Locations: Eye Movements of Adults and 6-Month-Olds Reveal Dynamic Spatial Indexing. J Exp Psychol Gen 2004; 133:46-62. [PMID: 14979751 DOI: 10.1037/0096-3445.133.1.46] [Citation(s) in RCA: 125] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The ability to keep track of locations in a dynamic, multimodal environment is crucial for successful interactions with other people and objects. The authors investigated the existence and flexibility of spatial indexing in adults and 6-month-old infants by adapting an eye-tracking paradigm from D. C. Richardson and M. J. Spivey (2000). Multimodal events were presented in specific locations, and eye movements were measured when the auditory portion of the stimulus was presented without its visual counterpart. Experiment 1 showed that adults spatially index auditory information even when the original associated locations move. Experiments 2 and 3 showed that infants are capable of both binding multimodal events to locations and tracking those locations when they move.
43
Lalanne C, Lorenceau J. Crossmodal integration for perception and action. JOURNAL OF PHYSIOLOGY, PARIS 2004; 98:265-79. [PMID: 15477038 DOI: 10.1016/j.jphysparis.2004.06.001] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
The integration of information from different sensory modalities has many advantages for human observers, including increased salience, resolution of perceptual ambiguities, and unified perception of objects and surroundings. Behavioral, electrophysiological and neuroimaging data collected in various tasks, including localization and detection of spatial events, crossmodal perception of object properties, and scene analysis, are reviewed here. All the results highlight the multiple faces of crossmodal interactions and provide converging evidence that the brain takes advantage of spatial and temporal coincidence between events in the crossmodal binding of spatial features gathered through different modalities. Furthermore, the elaboration of a multimodal percept appears to be based on an adaptive combination of the contribution of each modality, according to the intrinsic reliability of each sensory cue, which itself depends on the task at hand and the kind of perceptual cues involved in sensory processing. Computational models based on Bayesian sensory estimation provide valuable explanations of the way the perceptual system could perform such crossmodal integration. Recent anatomical evidence suggests that crossmodal interactions affect early stages of sensory processing and could be mediated through a dynamic recurrent network involving backprojections from multimodal areas as well as lateral connections that can modulate the activity of primary sensory cortices, though future behavioral and neurophysiological studies should allow a better understanding of the underlying mechanisms.
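The adaptive, reliability-weighted combination this abstract describes is often formalized as maximum-likelihood fusion of independent Gaussian cues, where each cue is weighted by its inverse variance. A minimal illustrative sketch (the function name and parameters are assumptions for illustration, not from the paper):

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of two independent
    Gaussian cue estimates; weights are inverse variances, so the less
    reliable (higher-variance) cue contributes less to the fused estimate."""
    w_a = 1.0 / var_a
    w_v = 1.0 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)  # fused variance is always <= either input variance
    return mu, var
```

This captures the abstract's point that the fused percept tracks whichever modality is currently more reliable, and that combining cues reduces overall uncertainty.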
Affiliation(s)
- Christophe Lalanne
- UNIC, CNRS UPR 2191, 1 avenue de la Terrasse, F91198 Gif-sur-Yvette, France.
44
Lewkowicz DJ. Heterogeneity and heterochrony in the development of intersensory perception. BRAIN RESEARCH. COGNITIVE BRAIN RESEARCH 2002; 14:41-63. [PMID: 12063129 DOI: 10.1016/s0926-6410(02)00060-5] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
It is now well established that a variety of intersensory perceptual skills emerge in early human development. Empirical evidence from studies in the author's as well as other laboratories charting the developmental emergence of these abilities is reviewed. The evidence is considered in terms of the currently dominant theoretical view of intersensory development that assigns the detection of amodal invariants a primary and foundational role. It is argued that this view is inadequate because the detection of amodal invariants is only one of three distinct intersensory integration processes. It is noted that the other two processes, namely, intersensory association of modality-specific cues and non-specific effects of stimulation in one modality on responsiveness to stimulation in another modality, are equally important and that the operation of all three and, in particular, the relation between them, must be studied to attain a complete understanding of intersensory perceptual development. It is suggested that the theoretical approach to the development of intersensory perception should be broadened to include all three types of processes and that developmental studies must respect basic facts and principles of development. To this end, a developmental systems approach is proposed that holds that the development of intersensory integration consists of the heterochronous emergence of heterogeneous perceptual skills.
Affiliation(s)
- David J Lewkowicz
- New York State Institute for Basic Research in Developmental Disabilities, 1050 Forest Hill Road, Staten Island, New York, NY 10314, USA.