1
Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024;178:106180. PMID: 38815526. DOI: 10.1016/j.bandc.2024.106180.
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process by which the brain implements multisensory integration (MSI) remains poorly understood. This cross-sectional study aims to characterize the developmental patterns of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated event-related potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
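The additive-model comparison at the heart of this abstract, testing the audiovisual ERP against the sum of the unisensory ERPs, can be sketched in a few lines. This is an illustrative outline on random placeholder data, not the authors' analysis pipeline; the trial counts, epoch length, sampling rate, and analysis window are invented for the example.

```python
import numpy as np

# Illustrative ERP arrays (trials x time samples); assume a 0-500 ms epoch at 1000 Hz.
rng = np.random.default_rng(0)
n_trials, n_samples = 100, 500
erp_a = rng.normal(size=(n_trials, n_samples))   # auditory-only trials
erp_v = rng.normal(size=(n_trials, n_samples))   # visual-only trials
erp_av = rng.normal(size=(n_trials, n_samples))  # audiovisual trials

# Trial-averaged waveforms per condition.
mean_a = erp_a.mean(axis=0)
mean_v = erp_v.mean(axis=0)
mean_av = erp_av.mean(axis=0)

# Additive model: signs of multisensory integration appear where AV != A + V.
difference_wave = mean_av - (mean_a + mean_v)

# Inspect the difference wave in a window spanning roughly the P2-N2 range (~150-300 ms).
window = slice(150, 300)
print(difference_wave[window].mean())
```

With real data the condition averages would come from cleaned, baseline-corrected epochs, and the difference wave would be tested statistically across participants rather than simply averaged.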
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, QC, Canada.
2
Neel ML, Jeanvoine A, Key A, Stark AR, Norton ES, Relland LM, Hay K, Maitre NL. Behavioral and neural measures of infant responsivity increase with maternal multisensory input in non-irritable infants. Brain Behav 2023;13:e3253. PMID: 37786238. PMCID: PMC10636412. DOI: 10.1002/brb3.3253.
Abstract
INTRODUCTION Parents often use sensory stimulation during early-life interactions with infants. These interactions, including gazing, rocking, or singing, scaffold child development. Previous studies have examined infant neural processing during highly controlled sensory stimulus presentation paradigms. OBJECTIVE In this study, we investigated infant behavioral and neural responsiveness during a mother-child social interaction in which the mother provided infant stimulation with a progressive increase in the number of sensory modalities. METHODS We prospectively collected and analyzed video-coded behavioral interactions and electroencephalogram (EEG) frontal asymmetry (FAS) from infants (n = 60) at 2-4 months born at ≥ 34 weeks gestation. RESULTS As the number of sensory modalities progressively increased during the interaction, infant behaviors of emotional connection in facial expressiveness, sensitivity to mother, and vocal communication increased significantly. Conversely, infant FAS for the entire cohort did not change significantly. However, when we accounted for infant irritability, both video-coded behaviors and EEG FAS markers of infant responsiveness increased across the interaction in the non-irritable infants. The non-irritable infants (49%) demonstrated positive FAS, indicating readiness to engage with, rather than to withdraw from, multisensory but not unisensory interactions with their mothers. CONCLUSIONS These results suggest that multisensory input from mothers is associated with a greater infant neural approach state and highlight the importance of infant behavioral state during neural measures of infant responsiveness.
Affiliation(s)
- Mary Lauren Neel
- Department of Pediatrics & Neonatology, Emory University School of Medicine & Children's Healthcare of Atlanta, Atlanta, GA, USA
- Arnaud Jeanvoine
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Ann R. Stark
- Department of Pediatrics & Neonatology, Beth Israel Deaconess Medical Center & Harvard Medical School, Boston, MA, USA
- Lance M. Relland
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Anesthesiology & Pain Medicine, Nationwide Children's Hospital & The Ohio State University, Columbus, OH, USA
- Krystal Hay
- The Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Nathalie L. Maitre
- Department of Pediatrics & Neonatology, Emory University School of Medicine & Children's Healthcare of Atlanta, Atlanta, GA, USA
3
Bursalıoğlu A, Michalak A, Guy MW. Intersensory redundancy impedes face recognition in 12-month-old infants. Front Psychol 2023;14:1210132. PMID: 37529309. PMCID: PMC10389088. DOI: 10.3389/fpsyg.2023.1210132.
Abstract
This study examined the role of intersensory redundancy in 12-month-old infants' attention to and processing of face stimuli. Two experiments were conducted. In Experiment 1, 72 12-month-olds were tested using an online platform called Lookit. Infants were familiarized with two videos, presented simultaneously, of an actor reciting a children's story. A soundtrack either matched one of the videos (experimental condition) or neither video (control condition). Visual paired-comparison (VPC) trials were completed to measure looking preferences for the faces presented synchronously and asynchronously during familiarization and for novel faces. Neither group displayed looking preferences during the VPC trials. It is possible that the complexity of the familiarization phase made the modality-specific face properties (i.e., facial characteristics and configuration) difficult to process. In Experiment 2, 56 12-month-old infants were familiarized with the video of only one actor, presented either synchronously or asynchronously with the soundtrack. Following familiarization, participants completed a VPC procedure including the familiar face and a novel face. Results from Experiment 2 showed that infants in the synchronous condition paid more attention during familiarization than infants in the asynchronous condition. Infants in the asynchronous condition demonstrated recognition of the familiar face. These findings suggest that the competing face stimuli in Experiment 1 were too complex for the facial characteristics to be processed. The procedure in Experiment 2 led to increased processing of the face in the asynchronous presentation. These results indicate that intersensory redundancy in the presentation of synchronous audiovisual faces is highly salient, discouraging the processing of modality-specific visual properties. This research contributes to the understanding of face processing in multimodal contexts, which has been understudied, although a great deal of naturalistic face exposure occurs multimodally.
Affiliation(s)
- Aslı Bursalıoğlu
- Department of Psychology, Loyola University Chicago, Chicago, IL, United States
4
Lickliter R, Bahrick LE, Vaillant-Mekras J. The role of task difficulty in directing selective attention in bobwhite quail (Colinus virginianus) neonates: A developmental test of the intersensory redundancy hypothesis. Dev Psychobiol 2023;65:e22381. PMID: 36946684. DOI: 10.1002/dev.22381.
Abstract
The dynamics of selective attention necessarily influence the course of early perceptual development. The intersensory redundancy hypothesis proposes that, in early development, information presented redundantly across two or more senses selectively recruits attention to the amodal properties of an object or event. In contrast, information presented to a single sense enhances attention to modality-specific properties. The present study assessed the second of these predictions in neonatal bobwhite quail (Colinus virginianus), with a focus on the role of task difficulty in directing selective attention. In Experiment 1, we exposed quail chicks to unimodal auditory, nonredundant audiovisual, or redundant audiovisual presentations of a bobwhite maternal call paired with a pulsing light for 10 min/h on the day following hatching. Chicks were subsequently tested individually 24 h later for their unimodal auditory preference between the familiarized maternal call and the same call with pitch altered by two steps. Chicks from all experimental groups preferred the familiarized maternal call over the altered maternal call. In Experiment 2, we repeated the exposure conditions of Experiment 1 but presented a more difficult task by narrowing the pitch range between the two maternal calls during testing. Chicks in the unimodal auditory and nonredundant audiovisual conditions preferred the familiarized call, whereas chicks in the redundant audiovisual exposure group showed no detection of the pitch change. Our results indicate that early discrimination of pitch change is disrupted by intersensory redundancy under difficult but not easy task conditions. These findings, along with findings from human infants, highlight the role of task difficulty in shifting attentional selectivity and underscore the dynamic nature of neonatal attentional salience hierarchies.
Affiliation(s)
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, Florida, USA
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, Florida, USA
5
López-Arango G, Deguire F, Agbogba K, Boucher MA, Knoth IS, El-Jalbout R, Côté V, Damphousse A, Kadoury S, Lippé S. Impact of brain overgrowth on sensorial learning processing during the first year of life. Front Hum Neurosci 2022;16:928543. PMID: 35927999. PMCID: PMC9344916. DOI: 10.3389/fnhum.2022.928543.
Abstract
Macrocephaly is present in about 2–5% of the general population. It can be found as an isolated benign trait or as part of a syndromic condition. Brain overgrowth has been associated with neurodevelopmental disorders such as autism during the first year of life; however, the evidence remains inconclusive. Furthermore, most studies have involved pathological or high-risk populations, and little is known about the effects of brain overgrowth on neurodevelopment in otherwise neurotypical infants. We investigated the impact of brain overgrowth on basic perceptual learning processes (repetition effects and the change detection response) during the first year of life. We recorded high-density electroencephalograms (EEG) in 116 full-term healthy infants aged between 3 and 11 months, 35 macrocephalic (14 girls) and 81 normocephalic (39 girls), classified according to the WHO head circumference norms. We used an adapted oddball paradigm, time-frequency analyses, and auditory event-related brain potentials (ERPs) to investigate differences between groups. We show that brain overgrowth has a significant impact on repetition effects and the change detection response in the 10–20 Hz frequency band, and on N450 latency, suggesting that these correlates of sensorial learning processes are sensitive to brain overgrowth during the first year of life.
Affiliation(s)
- Gabriela López-Arango
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Department of Neurosciences, Montreal University, Montreal, QC, Canada
- Florence Deguire
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Department of Psychology, Montreal University, Montreal, QC, Canada
- Kristian Agbogba
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Polytechnique Montreal, Montreal, QC, Canada
- Inga S. Knoth
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Ramy El-Jalbout
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Department of Medical Imaging, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Valérie Côté
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Amélie Damphousse
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Department of Medical Imaging, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Sarah Lippé
- Research Center, Sainte-Justine Hospital, Montreal University, Montreal, QC, Canada
- Department of Psychology, Montreal University, Montreal, QC, Canada
6
Tan SHJ, Kalashnikova M, Di Liberto GM, Crosse MJ, Burnham D. Seeing a Talking Face Matters: The Relationship between Cortical Tracking of Continuous Auditory-Visual Speech and Gaze Behaviour in Infants, Children and Adults. Neuroimage 2022;256:119217. PMID: 35436614. DOI: 10.1016/j.neuroimage.2022.119217.
Abstract
An auditory-visual speech benefit, the benefit that visual speech cues bring to auditory speech perception, is experienced from early in infancy and continues to be experienced to an increasing degree with age. While there is both behavioural and neurophysiological evidence for children and adults, only behavioural evidence exists for infants, as no neurophysiological study has provided a comprehensive examination of the auditory-visual speech benefit in infants. It is also surprising that most studies of the auditory-visual speech benefit do not concurrently report looking behaviour, especially since the benefit rests on the assumption that listeners attend to a speaker's talking face and since there are meaningful individual differences in looking behaviour. To address these gaps, we simultaneously recorded electroencephalographic (EEG) and eye-tracking data from 5-month-olds, 4-year-olds and adults as they were presented with a speaker in auditory-only (AO), visual-only (VO), and auditory-visual (AV) modes. Cortical tracking analyses involving forward encoding models of the speech envelope revealed an auditory-visual speech benefit [i.e., AV > (A+V)] in 5-month-olds and adults but not in 4-year-olds. Examination of cortical tracking accuracy in relation to looking behaviour showed that infants' relative attention to the speaker's mouth (vs. eyes) was positively correlated with cortical tracking accuracy of VO speech, whereas adults' attention to the display overall was negatively correlated with cortical tracking accuracy of VO speech. This study provides the first neurophysiological evidence of an auditory-visual speech benefit in infants, and our results suggest ways in which current models of speech processing can be fine-tuned.
Affiliation(s)
- S H Jessica Tan
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University
- Marina Kalashnikova
- The Basque Center on Cognition, Brain and Language; IKERBASQUE, Basque Foundation for Science
- Michael J Crosse
- Trinity Center for Biomedical Engineering, Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College Dublin, Dublin, Ireland
- Denis Burnham
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University
7
Zhou HY, Yang HX, Wei Z, Wan GB, Lui SSY, Chan RCK. Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study. Psych J 2022;11:409-418. PMID: 35350086. DOI: 10.1002/pchj.538.
Abstract
During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beep pairs and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voices; the other's either led or lagged behind the soundtrack by 600 ms. Children aged 3-6 years (n = 94, 52.13% males) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on dynamic faces, we found that focusing more attention on the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with increased viewing time of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with worse performance in speech synchrony processing. These findings extend previous research by showing the development of speech synchrony perception in young children, and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhen Wei
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Guo-Bin Wan
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Simon S Y Lui
- Department of Psychiatry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
8
Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022;254:119141. PMID: 35342006. DOI: 10.1016/j.neuroimage.2022.119141.
Abstract
Audiovisual integration (AVI) has been associated with cognitive-processing and behavioral advantages, as well as with various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct the fiber tracts linking AVI regions in the newborn in-vivo brain and to assess their adult-likeness by comparing them with analogous fiber tracts in young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project (dHCP), http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess the between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize the neonates' AVI brain network in terms of microstructural properties, interhemispheric differences, and associations with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in the vast majority of neonates, similarly to adults. Distributional similarities of connective probabilities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers. There was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
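The Dice coefficient used here to quantify spatial overlap of reconstructed fibers has a simple closed form, 2|A∩B| / (|A| + |B|). A minimal sketch over binary tract masks follows; the masks are toy data invented for illustration, not dHCP or HCP outputs.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two boolean voxel masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy binarized tract masks for two groups: 8 voxels each, half overlapping.
neonate_mask = np.zeros((4, 4, 4), dtype=bool)
adult_mask = np.zeros((4, 4, 4), dtype=bool)
neonate_mask[1:3, 1:3, 1:3] = True
adult_mask[1:3, 1:3, 2:4] = True

print(dice_coefficient(neonate_mask, adult_mask))  # → 0.5
```

In practice the masks would be thresholded, binarized tractography maps registered to a common space before the overlap is computed.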
9
Kadlaskar G, Bergmann S, McNally Keehn R, Seidl A, Keehn B. Electrophysiological Measures of Tactile and Auditory Processing in Children With Autism Spectrum Disorder. Front Hum Neurosci 2022;15:729270. PMID: 35002650. PMCID: PMC8733620. DOI: 10.3389/fnhum.2021.729270.
Abstract
Behavioral differences in responding to tactile and auditory stimuli are widely reported in individuals with autism spectrum disorder (ASD). However, the neural mechanisms underlying distinct tactile and auditory reactivity patterns in ASD remain unclear, with theories implicating differences in both perceptual and attentional processes. The current study sought to investigate (1) the neural indices of early perceptual and later attentional factors underlying tactile and auditory processing in children with and without ASD, and (2) the relationship between neural indices of tactile and auditory processing and ASD symptomatology. Participants included 14 6–12-year-olds with ASD and 14 age- and nonverbal-IQ-matched typically developing (TD) children. Children participated in an event-related potential (ERP) oddball paradigm during which they watched a silent video while being presented with tactile and auditory stimuli (i.e., 80% standard speech sound /a/; 10% oddball speech sound /i/; 10% novel vibrotactile stimuli on the fingertip with the standard speech sound /a/). Children's early and later ERP responses to tactile (P1 and N2) and auditory stimuli (P1, P3a, and P3b) were examined. Non-parametric analyses showed that children with ASD displayed differences in early perceptual processing of auditory (i.e., lower amplitudes at the central region of interest), but not tactile, stimuli. Analysis of later attentional components did not show differences in response to tactile and auditory stimuli between the ASD and TD groups. Together, these results suggest that differences in auditory responsivity patterns could be related to perceptual factors in children with ASD. However, despite differences in caregiver-reported sensory measures, children with ASD did not differ from TD children in their neural reactivity to infrequent touch-speech stimuli. Nevertheless, correlational analyses confirmed that inter-individual differences in neural responsivity to tactile and auditory stimuli were related to social skills in all children. Finally, we discuss how the paradigm and stimulus type used in the current study may have impacted our results. These findings have implications for everyday life, where individual differences in responding to tactile and auditory stimuli may impact social functioning.
Affiliation(s)
- Girija Kadlaskar
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- Sophia Bergmann
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- Rebecca McNally Keehn
- Department of Pediatrics, Indiana University School of Medicine, Indianapolis, IN, United States
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- Brandon Keehn
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States; Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States
10
Lalonde K, Werner LA. Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit. Brain Sci 2021;11:49. PMID: 33466253. PMCID: PMC7824772. DOI: 10.3390/brainsci11010049.
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants' and children's use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE 68131, USA
- Lynne A. Werner
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
11
Rodrigo MJ, Muñetón-Ayala M, de Vega M. Exploring the Co-occurrence of Manual Verbs and Actions in Early Mother-Child Communication. Front Psychol 2020;11:596080. PMID: 33240185. PMCID: PMC7683411. DOI: 10.3389/fpsyg.2020.596080.
Abstract
The embodiment approach has shown that motor neural networks are involved in the processing of action verbs. There is developmental evidence that embodied effects on verb processing are already present in the early years. Yet, the ontogenetic origin of this motor reuse in action verbs remains unknown. This longitudinal study investigates the co-occurrence of manual verbs and actions during mother-child daily routines (free play, bathing, and dining) when children were 1 to 2 (Group 1) and 2 to 3 (Group 2) years old. Eight mother-child dyads were video-recorded at 3-month intervals across 12 months (27 recording hours), and the timing of verbs and manual actions (21,876 entries) was coded by independent observers. Results showed that the probability of matched verb-action co-occurrences was much higher (0.80 and 0.77) than that of random co-occurrences (0.13 and 0.15) for Group 1 and Group 2, respectively. The distributions of the verb-action temporal intervals in both groups were quite symmetrical and skewed, with the peak corresponding to both the 0.00-s synchronic interval (8% of the cases) and the shortest +5-s interval (40% of the cases). Mother-led instances occurred in both groups, whereas child-led instances were restricted to Group 2. Mothers pragmatically aligned their verbal productions, since they repeatedly used (74%) those verbs they shared with their children's repertoire (31%). In conclusion, the early multisensory communicative and manipulative scene affords grounding of verb meanings in the ongoing actions, facilitating verb-action pairing in the realm of social interactions and providing a new dimension to the prevailing solipsistic approach to embodiment.
Affiliation(s)
- María José Rodrigo
- Facultad de Psicología, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
- Manuel de Vega
- Instituto Universitario de Neurociencias, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
12
Shic F, Wang Q, Macari SL, Chawarska K. The role of limited salience of speech in selective attention to faces in toddlers with autism spectrum disorders. J Child Psychol Psychiatry 2020;61:459-469. PMID: 31471912. PMCID: PMC7048639. DOI: 10.1111/jcpp.13118.
Abstract
BACKGROUND Impaired attention to faces of interactive partners is a marker for autism spectrum disorder (ASD) in early childhood. However, it is unclear whether children with ASD avoid faces or find them less salient and whether the phenomenon is linked with the presence of eye contact or speech. METHODS We investigated the impacts of speech (SP) and direct gaze (DG) on attention to faces in 22-month-old toddlers with ASD (n = 50) and typically developing controls (TD, n = 47) using the Selective Social Attention 2.0 (SSA 2.0) task. The task consisted of four conditions where the presence (+) and absence (-) of DG and SP were systematically manipulated. The severity of autism symptoms and verbal and nonverbal skills were characterized concurrently with eye tracking at 22.4 (SD = 3.2) months and prospectively at 39.8 (SD = 4.3) months. RESULTS Toddlers with ASD looked less than TD toddlers at face and mouth regions only when the actress was speaking (direct gaze absent with speech, DG-SP+: d = 0.99, p < .001 for face, d = 0.98, p < .001 for mouth regions; direct gaze present with speech, DG+SP+: d = 1.47, p < .001 for face, d = 1.01, p < .001 for mouth regions). Toddlers with ASD looked less at the eye region only when both gaze and speech cues were present (d = 0.46, p = .03). Salience of the combined DG and SP cues was associated concurrently and prospectively with the severity of autism symptoms, and the association remained significant after controlling for verbal and nonverbal levels. CONCLUSIONS The study links poor attention to faces with limited salience of audiovisual speech and provides no support for the face avoidance hypothesis in the early stages of ASD. These results are consequential for research on early discriminant and predictive biomarkers as well as identification of novel treatment targets.
Affiliation(s)
- Frederick Shic
- Yale School of Medicine, Child Study Center; 40 Temple St Ste 7D; New Haven, CT 06510
- Seattle Children’s Research Institute, Center for Child Health, Behavior and Development; 2001 8 Ave Ste 400; Seattle, WA 98121
- University of Washington School of Medicine, Department of Pediatrics; 2001 8 Ave Ste 400; Seattle, WA 98121
- Quan Wang
- Yale School of Medicine, Child Study Center; 40 Temple St Ste 7D; New Haven, CT 06510
- Suzanne L. Macari
- Yale School of Medicine, Child Study Center; 40 Temple St Ste 7D; New Haven, CT 06510
- Katarzyna Chawarska
- Yale School of Medicine, Child Study Center; 40 Temple St Ste 7D; New Haven, CT 06510
13
Suppanen E, Huotilainen M, Ylinen S. Rhythmic structure facilitates learning from auditory input in newborn infants. Infant Behav Dev 2019; 57:101346. [DOI: 10.1016/j.infbeh.2019.101346]
14
Lalonde K, Werner LA. Infants and Adults Use Visual Cues to Improve Detection and Discrimination of Speech in Noise. J Speech Lang Hear Res 2019; 62:3860-3875. [PMID: 31618097] [PMCID: PMC7201336] [DOI: 10.1044/2019_jslhr-h-19-0106]
Abstract
Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1-3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset-offset cue for detection, but the same cue did not improve their discrimination. The onset-offset cue benefited infants for both detection and discrimination. Whereas the onset-offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset-offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
Affiliation(s)
- Kaylah Lalonde
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Lynne A. Werner
- Department of Speech & Hearing Sciences, University of Washington, Seattle
15
Kayhan E, Meyer M, O'Reilly JX, Hunnius S, Bekkering H. Nine-month-old infants update their predictive models of a changing environment. Dev Cogn Neurosci 2019; 38:100680. [PMID: 31357079] [PMCID: PMC6969335] [DOI: 10.1016/j.dcn.2019.100680]
Abstract
Humans generate internal models of their environment to predict events in the world. As the environment changes, our brains adjust to these changes by updating their internal models. Here, we investigated whether and how 9-month-old infants differentially update their models to represent a dynamic environment. Infants observed a predictable sequence of stimuli, which were interrupted by two types of cues. Following the update cue, the pattern was altered; thus, infants were expected to update their predictions for the upcoming stimuli. Because the pattern remained the same after the no-update cue, no subsequent updating was required. Infants showed an amplified negative central (Nc) response when the predictable sequence was interrupted. Late components such as the positive slow wave (PSW) were also evoked in response to unexpected stimuli; however, we found no evidence for a differential response to the informational value of surprising cues at later stages of processing. Infants rather learned that surprising cues always signal a change in the environment that requires updating. Interestingly, infants responded with an amplified neural response to the absence of an expected change, suggesting a top-down modulation of early sensory processing in infants. Our findings corroborate emerging evidence showing that infants build predictive models early in life.
Affiliation(s)
- E Kayhan
- University of Potsdam, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Germany.
- M Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Germany
- J X O'Reilly
- Max Planck Institute for Human Cognitive and Brain Sciences, Germany
- S Hunnius
- Max Planck Institute for Human Cognitive and Brain Sciences, Germany
- H Bekkering
- Max Planck Institute for Human Cognitive and Brain Sciences, Germany
16
That does not sound right: Sounds affect visual ERPs during a piano sight-reading task. Behav Brain Res 2019; 367:1-9. [DOI: 10.1016/j.bbr.2019.03.037]
17
Dixon KC, Reynolds GD, Romano AC, Roth KC, Stumpe AL, Guy MW, Mosteller SM. Neural correlates of individuation and categorization of other-species faces in infancy. Neuropsychologia 2019; 126:27-35. [PMID: 28986267] [PMCID: PMC5882603] [DOI: 10.1016/j.neuropsychologia.2017.09.037]
Abstract
The goal of this study was to investigate 9-month-old infants' ability to individuate and categorize other-species faces at the subordinate level. We were also interested in examining the effects of initial exposure conditions on infant categorization and individuation processes. Infants were either familiarized with a single monkey face in an individuation procedure or familiarized with multiple exemplars of monkey faces from the same species in a categorization procedure. Event-related potentials were recorded while the infants were presented: familiar faces, novel faces from the familiar species, or novel faces from a novel species. The categorization group categorized monkey faces by species at the subordinate level, whereas the individuation group did not discriminate monkey faces at the individual or subordinate level. These findings indicate initial exposure to multiple exemplars facilitates infant processing of other-species faces, and infants are efficient at subordinate-level categorization at 9 months of age.
Affiliation(s)
- Kate C Dixon
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA; University of Louisville, Department of Psychological and Brain Sciences, Louisville, KY 40292, USA.
- Greg D Reynolds
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA.
- Alexandra C Romano
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA.
- Kelly C Roth
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA.
- Alexa L Stumpe
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA.
- Maggie W Guy
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA; University of South Carolina, Department of Psychology, Columbia, SC 29208, USA.
- Sara M Mosteller
- University of Tennessee, Department of Psychology, Knoxville, TN 37996, USA; University of East Anglia, School of Psychology, Norwich NR4 7TJ, United Kingdom.
18
Curtindale LM, Bahrick LE, Lickliter R, Colombo J. Effects of multimodal synchrony on infant attention and heart rate during events with social and nonsocial stimuli. J Exp Child Psychol 2019; 178:283-294. [PMID: 30445204] [PMCID: PMC6980371] [DOI: 10.1016/j.jecp.2018.10.006]
Abstract
Attention is a state of readiness or alertness, associated with behavioral and psychophysiological responses, that facilitates learning and memory. Multisensory and dynamic events have been shown to elicit more attention and produce greater sustained attention in infants than auditory or visual events alone. Such redundant and often temporally synchronous information guides selectivity and facilitates perception, learning, and memory of properties of events specified by redundancy. In addition, events involving faces or other social stimuli provide an extraordinary amount of redundant information that attracts and sustains attention. In the current study, 4- and 8-month-old infants were shown 2-min multimodal videos featuring social or nonsocial stimuli to determine the relative roles of synchrony and stimulus category in inducing attention. Behavioral measures included average looking time and peak look duration, and convergent measurement of heart rate (HR) allowed for the calculation of HR-defined phases of attention: Orienting (OR), sustained attention (SA), and attention termination (AT). The synchronous condition produced an earlier onset of SA (less time in OR) and a deeper state of SA than the asynchronous condition. Social stimuli attracted and held attention (longer duration of peak looks and lower HR than nonsocial stimuli). Effects of synchrony and the social nature of stimuli were additive, suggesting independence of their influence on attention. These findings are the first to demonstrate different HR-defined phases of attention as a function of intersensory redundancy, suggesting greater salience and deeper processing of naturalistic synchronous audiovisual events compared with asynchronous ones.
Affiliation(s)
- Lori M Curtindale
- Department of Psychology, East Carolina University, Greenville, NC 27858, USA.
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- John Colombo
- Department of Psychology, University of Kansas, Lawrence, KS 66045, USA
19
Werchan DM, Baumgartner HA, Lewkowicz DJ, Amso D. The origins of cortical multisensory dynamics: Evidence from human infants. Dev Cogn Neurosci 2018; 34:75-81. [PMID: 30099263] [PMCID: PMC6629259] [DOI: 10.1016/j.dcn.2018.07.002]
Abstract
Classic views of multisensory processing suggest that cortical sensory regions are specialized. More recent views argue that cortical sensory regions are inherently multisensory. To date, there are no published neuroimaging data that directly test these claims in infancy. Here we used fNIRS to show that temporal and occipital cortex are functionally coupled in 3.5-5-month-old infants (N = 65), and that the extent of this coupling during a synchronous, but not an asynchronous, audiovisual event predicted whether occipital cortex would subsequently respond to sound-only information. These data suggest that multisensory experience may shape cortical dynamics to adapt to the ubiquity of synchronous multisensory information in the environment, and invoke the possibility that adaptation to the environment can also reflect broadening of the computational range of sensory systems.
Affiliation(s)
- Denise M Werchan
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St. Providence, RI, 02912, United States
- Heidi A Baumgartner
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St. Providence, RI, 02912, United States
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, 360 Huntington Ave., Boston, MA, 02115, United States
- Dima Amso
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St. Providence, RI, 02912, United States.
20
Altvater-Mackensen N, Grossmann T. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018; 34:130-138. [PMID: 30391756] [PMCID: PMC6969291] [DOI: 10.1016/j.dcn.2018.10.002]
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
Affiliation(s)
- Nicole Altvater-Mackensen
- Department of Psychology, Johannes-Gutenberg-University Mainz, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Tobias Grossmann
- Department of Psychology, University of Virginia, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
21
Dibavar MR. Infants' intermodal numerical knowledge. Infant Behav Dev 2018; 52:32-44. [PMID: 29807236] [DOI: 10.1016/j.infbeh.2018.04.006]
Abstract
Two-system theory, the dominant approach in the field of infant numerical representation, is characterized by three features: precise representation of small sets of objects, approximate representation of large magnitudes, and failure to compare small and large sets. Comparison of single- and multimodal numerical abilities suggests that infants' performance in multimodal conditions is consistent with these three features. Nevertheless, multimodal stimulation influences infants' numerical representation in two ways: it prevents the formation of perceptual overlaps across different sensory modalities, which can lead to an understanding of the numerical values of small sets, and it creates a conceptual overlap about numbers that increases infants' accuracy in discriminating quantities when numerical information is presented bimodally and synchronously. Such multisensory benefits provide numerical capabilities beyond what is depicted by the two-system view.
22
Reynolds GD, Roth KC. The Development of Attentional Biases for Faces in Infancy: A Developmental Systems Perspective. Front Psychol 2018; 9:222. [PMID: 29541043] [PMCID: PMC5835799] [DOI: 10.3389/fpsyg.2018.00222]
Abstract
We present an integrative review of research and theory on major factors involved in the early development of attentional biases to faces. Research utilizing behavioral, eye-tracking, and neuroscience measures with infant participants as well as comparative research with animal subjects are reviewed. We begin with coverage of research demonstrating the presence of an attentional bias for faces shortly after birth, such as newborn infants' visual preference for face-like over non-face stimuli. The role of experience and the process of perceptual narrowing in face processing are examined as infants begin to demonstrate enhanced behavioral and neural responsiveness to mother over stranger, female over male, own- over other-race, and native over non-native faces. Next, we cover research on developmental change in infants' neural responsiveness to faces in multimodal contexts, such as audiovisual speech. We also explore the potential influence of arousal and attention on early perceptual preferences for faces. Lastly, the potential influence of the development of attention systems in the brain on social-cognitive processing is discussed. In conclusion, we interpret the findings under the framework of Developmental Systems Theory, emphasizing the combined and distributed influence of several factors, both internal (e.g., arousal, neural development) and external (e.g., early social experience) to the developing child, in the emergence of attentional biases that lead to enhanced responsiveness and processing of faces commonly encountered in the native environment.
Affiliation(s)
- Greg D. Reynolds
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, United States
23
Lickliter R, Bahrick LE, Vaillant-Mekras J. The intersensory redundancy hypothesis: Extending the principle of unimodal facilitation to prenatal development. Dev Psychobiol 2017; 59:910-915. [PMID: 28833041] [PMCID: PMC5630509] [DOI: 10.1002/dev.21551]
Abstract
Selective attention to different properties of stimulation provides the foundation for perception, learning, and memory. The Intersensory Redundancy Hypothesis (IRH) proposes that early in development information presented redundantly across two or more modalities (multimodal) selectively recruits attention to and enhances perceptual learning of amodal properties, whereas information presented to a single sense modality (unimodal) enhances perceptual learning of modality-specific properties. The present study is the first to assess this principle of unimodal facilitation in non-human animals in prenatal development. We assessed bobwhite quail embryos' prenatal detection of pitch, a modality-specific property, under conditions of unimodal and bimodal (synchronous or asynchronous) exposure. Chicks exposed to prenatal unimodal auditory stimulation or asynchronous bimodal (audiovisual) stimulation preferred the familiarized maternal call over a novel pitch-modified maternal call following hatching, whereas chicks exposed to redundant (synchronous) audiovisual stimulation failed to prefer the familiar call over the pitch-modified call. These results provide further evidence that selective attention is recruited to specific stimulus properties of events in early development and that these biases are evident even during the prenatal period.
24
Gogate L. Development of Early Multisensory Perception and Communication: From Environmental and Behavioral to Neural Signatures. Dev Neuropsychol 2017; 41:269-272. [PMID: 28253037] [DOI: 10.1080/87565641.2017.1279429]
Affiliation(s)
- Lakshmi Gogate
- Department of Communication Sciences and Disorders, University of Missouri, Columbia, Missouri
25
Gogate L, Hollich G. Early Verb-Action and Noun-Object Mapping Across Sensory Modalities: A Neuro-Developmental View. Dev Neuropsychol 2017; 41:293-307. [PMID: 28059566] [DOI: 10.1080/87565641.2016.1243112]
Abstract
The authors provide an alternative to the traditional view that verbs are harder to learn than nouns by reviewing three lines of behavioral and neurophysiological evidence in word-mapping development across cultures. First, preverbal infants tune into word-action and word-object pairings using domain-general mechanisms. Second, while post-verbal infants from noun-friendly language environments experience verb-action mapping difficulty, infants from verb-friendly language environments do not. Third, children use language-specific conventions to learn all types of words, although still strongly influenced by their language environment. Additionally, the authors suggest neurophysiological research to advance these lines of evidence beyond traditional views of word learning.
Affiliation(s)
- Lakshmi Gogate
- Communication Sciences and Disorders, University of Missouri-Columbia, Columbia, Missouri
- George Hollich
- Psychological Sciences, Purdue University, West Lafayette, Indiana
26
Bahrick LE, Todd JT, Castellanos I, Sorondo BM. Enhanced attention to speaking faces versus other event types emerges gradually across infancy. Dev Psychol 2016; 52:1705-1720. [PMID: 27786526] [PMCID: PMC5291072] [DOI: 10.1037/dev0000157]
Abstract
The development of attention to dynamic faces versus objects providing synchronous audiovisual versus silent visual stimulation was assessed in a large sample of infants. Maintaining attention to the faces and voices of people speaking is critical for perceptual, cognitive, social, and language development. However, no studies have systematically assessed when, if, or how attention to speaking faces emerges and changes across infancy. Two measures of attention maintenance, habituation time (HT) and look-away rate (LAR), were derived from cross-sectional data of 2- to 8-month-old infants (N = 801). Results indicated that attention to audiovisual faces and voices was maintained across age, whereas attention to each of the other event types (audiovisual objects, silent dynamic faces, silent dynamic objects) declined across age. This reveals a gradually emerging advantage in attention maintenance (longer HTs, lower LARs) for audiovisual speaking faces compared with the other 3 event types. At 2 months, infants showed no attentional advantage for faces (with greater attention to audiovisual than to visual events); at 3 months, they attended more to dynamic faces than objects (in the presence or absence of voices), and by 4 to 5 and 6 to 8 months, significantly greater attention emerged to temporally coordinated faces and voices of people speaking compared with all other event types. Our results indicate that selective attention to coordinated faces and voices over other event types emerges gradually across infancy, likely as a function of experience with multimodal, redundant stimulation from person and object events.
Affiliation(s)
- Irina Castellanos
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, Columbus, OH
- Barbara M. Sorondo
- Florida International University Libraries, Florida International University, Miami, FL
27
Suanda SH, Smith LB, Yu C. The Multisensory Nature of Verbal Discourse in Parent-Toddler Interactions. Dev Neuropsychol 2016; 41:324-341. [PMID: 28128992] [PMCID: PMC7263485] [DOI: 10.1080/87565641.2016.1256403]
Abstract
Toddlers learn object names in sensory rich contexts. Many argue that this multisensory experience facilitates learning. Here, we examine how toddlers' multisensory experience is linked to another aspect of their experience associated with better learning: the temporally extended nature of verbal discourse. We observed parent-toddler dyads as they played with, and as parents talked about, a set of objects. Analyses revealed links between the multisensory and extended nature of speech, highlighting inter-connections and redundancies in the environment. We discuss the implications of these results for our understanding of early discourse, multisensory communication, and how the learning environment shapes language development.
Affiliation(s)
- Sumarga H Suanda
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
28
Williams JL, Corbetta D. Assessing the Impact of Movement Consequences on the Development of Early Reaching in Infancy. Front Psychol 2016; 7:587. [PMID: 27199822] [PMCID: PMC4846662] [DOI: 10.3389/fpsyg.2016.00587]
Abstract
Prior research on infant reaching has shown that providing infants with repeated opportunities to reach for objects aids the emergence and progression of reaching behavior. This study investigated the effect of movement consequences on the process of learning to reach in pre-reaching infants. Thirty-five infants aged 2.9 months at the onset of the study were randomly assigned to 1 of 3 groups. Two groups received a 14-day intervention with distinct reaching tasks: (1) in a contingent group, a toy target moved and sounded upon contact only, and (2) in a continuous group, the toy moved and sounded continuously, independent of hand-toy contact. A third control group did not receive any intervention; this group's performance was assessed only on 2 days at a 15-day interval. Results revealed that infants in the contingent group made the most progress over time compared to the two other groups. Infants in this group made significantly more overall contacts with the sounding/moving toy, and they increased their rate of visually attended target contacts relative to non-visually attended target contacts compared to the continuous and control groups. Infants in the continuous group did not differ from the control group on the number of hand-toy contacts, nor did they show a change in the ratio of visually attended to non-visually attended target contacts over time. However, they did show an increase in movement speed, presumably in an attempt to attain the moving toy. These findings highlight the importance of contingent movement consequences as a critical reinforcer for the selection of action and motor learning in early development. Through repeated opportunities to explore movement consequences, infants discover and select movements that are most successful for the task at hand. This study further demonstrates that distinct sensory-motor experiences can have a significant impact on developmental trajectories and can influence the skills young infants will discover through their interactions with their surroundings.
Affiliation(s)
- Joshua L Williams
- Department of Psychology, Armstrong State University, Savannah, GA, USA
- Daniela Corbetta
- Department of Psychology, The University of Tennessee, Knoxville, TN, USA
29
Petrini K, Jones PR, Smith L, Nardini M. Hearing Where the Eyes See: Children Use an Irrelevant Visual Cue When Localizing Sounds. Child Dev 2015; 86:1449-57. [DOI: 10.1111/cdev.12397]
30
Bahrick LE, Lickliter R, Castellanos I, Todd JT. Intrasensory Redundancy Facilitates Infant Detection of Tempo: Extending Predictions of the Intersensory Redundancy Hypothesis. INFANCY 2015; 20:377-404. [PMID: 26207101 PMCID: PMC4508026 DOI: 10.1111/infa.12081] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2014] [Accepted: 02/23/2015] [Indexed: 11/26/2022]
Abstract
Research has demonstrated that intersensory redundancy (stimulation synchronized across multiple senses) is highly salient and facilitates processing of amodal properties in multimodal events, bootstrapping early perceptual development. The present study is the first to extend this central principle of the intersensory redundancy hypothesis (IRH) to certain types of intrasensory redundancy (stimulation synchronized within a single sense). Infants were habituated to videos of a toy hammer tapping silently (unimodal control), depicting intersensory redundancy (synchronized with a soundtrack) or intrasensory redundancy (synchronized with another visual event; light flashing or bat tapping). In Experiment 1, 2-month-olds showed both intersensory and intrasensory facilitation (with respect to the unimodal control) for detecting a change in tempo. However, intrasensory facilitation was found when the hammer was synchronized with the light flashing (different motion) but not with the bat tapping (same motion). Experiment 2 tested 3-month-olds using a somewhat easier tempo contrast. Results supported a similarity hypothesis: intrasensory redundancy between two dissimilar events was more effective than that between two similar events for promoting processing of amodal properties. These findings extend the IRH and indicate that in addition to intersensory redundancy, intrasensory redundancy between two synchronized dissimilar visual events is also effective in promoting perceptual processing of amodal event properties.
Affiliation(s)
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL
| | - Irina Castellanos
- Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN
31
Reynolds GD. Infant visual attention and object recognition. Behav Brain Res 2015; 285:34-43. [PMID: 25596333 PMCID: PMC4380660 DOI: 10.1016/j.bbr.2015.01.015] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Revised: 01/06/2015] [Accepted: 01/08/2015] [Indexed: 11/26/2022]
Abstract
This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy.
Affiliation(s)
- Greg D Reynolds
- Department of Psychology, University of Tennessee, Knoxville, TN 37996, United States.
32
Dionne-Dostie E, Paquette N, Lassonde M, Gallagher A. Multisensory integration and child neurodevelopment. Brain Sci 2015; 5:32-57. [PMID: 25679116 PMCID: PMC4390790 DOI: 10.3390/brainsci5010032] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2014] [Accepted: 01/27/2015] [Indexed: 12/17/2022] Open
Abstract
A considerable number of cognitive processes depend on the integration of multisensory information. The brain integrates this information, providing a complete representation of our surrounding world and giving us the ability to react optimally to the environment. Infancy is a period of great change in brain structure and function, reflected in the increasing processing capacities of the developing child. However, it is unclear whether the optimal use of multisensory information is present early in childhood or develops only later, with experience. The first part of this review has focused on the typical development of multisensory integration (MSI). We have described the two hypotheses on the developmental process of MSI in neurotypical infants and children, and have introduced MSI and its neuroanatomical correlates. The second section has discussed the neurodevelopmental trajectory of MSI in cognitively challenged infants and children. A few studies have brought to light various difficulties in integrating sensory information in children with a neurodevelopmental disorder. Consequently, we have outlined possible neurophysiological relationships between MSI deficits and neurodevelopmental disorders, especially dyslexia and attention deficit disorder with/without hyperactivity.
Affiliation(s)
- Emmanuelle Dionne-Dostie
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Department of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Natacha Paquette
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Department of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Maryse Lassonde
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Department of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
- Anne Gallagher
- Sainte-Justine University Hospital Research Center, Montreal H3T1C5, QC, Canada.
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Department of Psychology, University of Montreal, C.P. 6128, Montreal H3C3J7, QC, Canada.
33
Bahrick LE, Lickliter R. Learning to Attend Selectively: The Dual Role of Intersensory Redundancy. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2014; 23:414-420. [PMID: 25663754 DOI: 10.1177/0963721414549187] [Citation(s) in RCA: 73] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Selective attention is the gateway to perceptual processing, learning, and memory, and is a skill honed through extensive experience. However, little research has focused on how selective attention develops. Here we synthesize established and new findings assessing the central role of redundancy across the senses in guiding and constraining this process in infancy and early childhood. We highlight research demonstrating the dual role of intersensory redundancy (its facilitating and interfering effects) on detection and perceptual processing of various properties of objects and events.
34
ter Schure S, Mandell DJ, Escudero P, Raijmakers MEJ, Johnson SP. Learning Stimulus-Location Associations in 8- and 11-Month-Old Infants: Multimodal versus Unimodal Information. INFANCY 2014; 19:476-495. [PMID: 25147483 PMCID: PMC4136389 DOI: 10.1111/infa.12057] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2013] [Accepted: 06/17/2014] [Indexed: 11/29/2022]
Abstract
Research on the influence of multimodal information on infants' learning is inconclusive. While one line of research finds that multimodal input has a negative effect on learning, another finds positive effects. The present study aims to shed new light on this discussion by studying the influence of multimodal information and accompanying stimulus complexity on the learning process. We assessed the influence of multimodal input on the trial-by-trial learning of 8- and 11-month-old infants. Using an anticipatory eye movement paradigm, we measured how infants learn to anticipate the correct stimulus-location associations when exposed to visual-only, auditory-only (unimodal), or auditory and visual (multimodal) information. Our results show that infants in both the multimodal and visual-only conditions learned the stimulus-location associations. Although infants in the visual-only condition appeared to learn in fewer trials, infants in the multimodal condition showed better anticipatory behavior: as a group, they had a higher chance of anticipating correctly on more consecutive trials than infants in the visual-only condition. These findings suggest that effects of multimodal information on infant learning operate chiefly through effects on infants' attention.
Affiliation(s)
- Paola Escudero
- Cognitive Science Center Amsterdam, University of Amsterdam
- MARCS Institute, University of Western Sydney
35
Bahrick LE, Krogh-Jespersen S, Argumosa MA, Lopez H. Intersensory redundancy hinders face discrimination in preschool children: evidence for visual facilitation. Dev Psychol 2014; 50:414-21. [PMID: 23795552 PMCID: PMC3913744 DOI: 10.1037/a0033476] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of intersensory redundancy provided by audiovisual speech and enhanced when intersensory redundancy is absent. Evidence of this visual facilitation and intersensory interference was found in a recent study of 2-month-old infants (Bahrick, Lickliter, & Castellanos, in press). The present study is the first to extend tests of this principle of the IRH to children. Using a more difficult face recognition task in the context of a story, results from 4-year-old children paralleled those of infants and demonstrate that face discrimination in children is also facilitated by dynamic, visual-only exposure, in the absence of intersensory redundancy.
Affiliation(s)
- Hassel Lopez
- Department of Psychology, Florida International University