1
Ganea N, Addyman C, Yang J, Bremner A. Effects of multisensory stimulation on infants' learning of object pattern and trajectory. Child Dev 2024. [PMID: 39105480] [DOI: 10.1111/cdev.14147]
Abstract
This study investigated whether infants encode the features of a briefly occluded object better if its movements are specified simultaneously by vision and audition than if they are not (data collected: 2017-2019). Experiment 1 showed that 10-month-old infants (N = 39, 22 females, White-English) notice changes in the visual pattern on the object irrespective of the stimulation received (spatiotemporally congruent audio-visual stimulation, incongruent stimulation, or visual-only; ηp² = .53). Experiment 2 (N = 72, 36 females) found similar results in 6-month-olds (Test Block 1, ηp² = .13), but not 4-month-olds. Experiment 3 replicated this finding with another group of 6-month-olds (N = 42, 21 females) and showed that congruent stimulation enables infants to detect changes in object trajectory (d = 0.56) in addition to object pattern (d = 1.15), whereas incongruent stimulation hinders performance.
Affiliation(s)
- Nataşa Ganea
- Department of Psychology, Goldsmiths, University of London, London, UK
- Caspar Addyman
- Department of Psychology, Goldsmiths, University of London, London, UK
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan
- Andrew Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK
2
Lozano I, Belinchón M, Campos R. Sensitivity to temporal synchrony and selective attention in audiovisual speech in infants at elevated likelihood for autism: A preliminary longitudinal study. Infant Behav Dev 2024; 76:101973. [PMID: 38941721] [DOI: 10.1016/j.infbeh.2024.101973]
Abstract
Autism Spectrum Disorder is a highly heritable condition characterized by sociocommunicative difficulties, frequently entailing language atypicalities that extend to infants with a familial history of autism. The developmental mechanisms underlying these difficulties remain unknown. Detecting temporal synchrony between the lip movements and the auditory speech of a talking face and selectively attending to the mouth support typical early language acquisition. This preliminary eye-tracking study investigated whether these two fundamental mechanisms function atypically in infant siblings. We longitudinally tracked the trajectories of infants at elevated and low likelihood for autism in these two abilities at 4, 8, and 12 months (n = 29). We presented two talking faces (synchronous and asynchronous) while recording infants' gaze to the talker's eyes and mouth. We found that infants detected temporal asynchronies in talking faces at 12 months regardless of group. However, compared to their typically developing peers, infants with an elevated likelihood of autism showed reduced attention to the mouth at the end of the first year and no variations in their interest in this area across time. Our findings provide preliminary evidence of a potentially atypical trajectory of reduced mouth-looking in audiovisual speech during the first year in infant siblings, with potential cascading consequences for language development, thus contributing to domain-general accounts of emerging autism.
Affiliation(s)
- Itziar Lozano
- Department of Basic Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain; Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Mercedes Belinchón
- Department of Basic Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Ruth Campos
- Department of Basic Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
3
Cook SW, Wernette EMD, Valentine M, Aldugom M, Pruner T, Fenn KM. How Prior Knowledge, Gesture Instruction, and Interference After Instruction Interact to Influence Learning of Mathematical Equivalence. Cogn Sci 2024; 48:e13412. [PMID: 38402447] [DOI: 10.1111/cogs.13412]
Abstract
Although children learn more when teachers gesture, it is not clear how gesture supports learning. Here, we sought to investigate the nature of the memory processes that underlie the observed benefits of gesture on lasting learning. We hypothesized that instruction with gesture might create memory representations that are particularly resistant to interference. We investigated this possibility in a classroom study with 402 second- and third-grade children. Participants received classroom-level instruction in mathematical equivalence using videos with or without accompanying gesture. After instruction, children solved problems that were either visually similar to the problems that were taught and consistent with an operational interpretation of the equal sign (interference), or visually distinct from equivalence problems and without an equal sign (control), in order to assess the role of gesture in resisting interference after learning. Gesture facilitated learning, but the effects of gesture and interference varied depending on the type of problem being solved and the strategies that children used to solve problems prior to instruction. Some children benefitted from gesture, while others did not. These findings have implications for understanding the mechanisms underlying the beneficial effect of gesture on mathematical learning, revealing that gesture does not work via a general mechanism, such as enhancing attention or engagement, that would apply to children with all forms of prior knowledge.
Affiliation(s)
- Mary Aldugom
- Department of Psychological and Brain Sciences, University of Iowa
- Todd Pruner
- Department of Psychological and Brain Sciences, University of Iowa
4
Birulés J, Goupil L, Josse J, Fort M. The Role of Talking Faces in Infant Language Learning: Mind the Gap between Screen-Based Settings and Real-Life Communicative Interactions. Brain Sci 2023; 13:1167. [PMID: 37626523] [PMCID: PMC10452843] [DOI: 10.3390/brainsci13081167]
Abstract
Over the last few decades, developmental (psycho)linguists have demonstrated that perceiving talking faces audio-visually is important for early language acquisition. Using mostly well-controlled, screen-based laboratory approaches, this line of research has shown that paying attention to talking faces is likely to be one of the powerful strategies infants use to learn their native language(s). In this review, we combine evidence from these screen-based studies with another line of research that has studied how infants learn novel words and deploy their visual attention during naturalistic play. In our view, this is an important step toward developing an integrated account of how infants effectively extract audiovisual information from talkers' faces during early language learning. We identify three factors that have been understudied so far, despite the fact that they are likely to have an important impact on how infants deploy their attention (or not) toward talking faces during social interactions: social contingency, speaker characteristics, and task-dependencies. Last, we propose ideas to address these issues in future research, with the aim of reducing the existing knowledge gap between current experimental studies and the many ways infants can and do effectively rely upon the audiovisual information extracted from talking faces in their real-life language environment.
Affiliation(s)
- Joan Birulés
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Louise Goupil
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Jérémie Josse
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Mathilde Fort
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Centre de Recherche en Neurosciences de Lyon, INSERM U1028-CNRS UMR 5292, Université Lyon 1, 69500 Bron, France
5
Edgar EV, Todd JT, Eschman B, Hayes T, Bahrick LE. Effects of English versus Spanish language exposure on basic multisensory attention skills across 3 to 36 months of age. Dev Psychol 2023; 59:1359-1376. [PMID: 37199930] [PMCID: PMC10523924] [DOI: 10.1037/dev0001549]
Abstract
Recent research has demonstrated that individual differences in infant attention to faces and voices of women speaking predict language outcomes in childhood. These findings have been generated using two new audiovisual attention assessments appropriate for infants and young children, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP). The MAAP and IPEP assess three basic attention skills (sustaining attention, shifting/disengaging, intersensory matching), as well as distractibility, deployed in the context of naturalistic audiovisual social (women speaking English) and nonsocial events (objects impacting a surface). Might children with differential exposure to Spanish versus English show different patterns of attention to social events on these protocols as a function of language familiarity? We addressed this question in several ways using children (n = 81 dual-language learners; n = 23 monolingual-language learners) from South Florida, tested longitudinally across 3-36 months. First, and surprisingly, results indicated no significant English language advantage on any attention measure for children from monolingual English versus dual English-Spanish language environments. Second, for dual-language learners, exposure to English changed across age, decreasing slightly from 3-12 months and then increasing considerably by 36 months. Furthermore, for dual-language learners, structural equation modeling analyses revealed no English language advantage on the MAAP or IPEP as a function of degree of English language exposure. The few relations found were in the direction of greater performance for children with greater Spanish exposure. Together, findings indicate no English language advantage for basic multisensory attention skills assessed by the MAAP or IPEP between the ages of 3 and 36 months.
6
Edgar EV, Eschman B, Todd JT, Testa K, Ramirez B, Bahrick LE. The effects of socioeconomic status on working memory in childhood are partially mediated by intersensory processing of audiovisual events in infancy. Infant Behav Dev 2023; 72:101844. [PMID: 37271061] [PMCID: PMC10527496] [DOI: 10.1016/j.infbeh.2023.101844]
Abstract
Socioeconomic status (SES) is a well-established predictor of individual differences in childhood language and cognitive functioning, including executive functions such as working memory. In infancy, intersensory processing-selectively attending to properties of events that are redundantly specified across the senses at the expense of non-redundant, irrelevant properties-also predicts language development. Our recent research demonstrates that individual differences in intersensory processing in infancy predict a variety of language outcomes in childhood, even after controlling for SES. However, relations among intersensory processing and cognitive outcomes such as working memory have not yet been investigated. Thus, the present study examines relations between intersensory processing in infancy and working memory in early childhood, and the role of SES in this relation. Children (N = 101) received the Multisensory Attention Assessment Protocol at 12-months to assess intersensory processing (face-voice and object-sound matching) and received the WPPSI at 36-months to assess working memory. SES was indexed by maternal education, paternal education, and income. A variety of novel findings emerged. 1) Individual differences in intersensory processing at 12-months predicted working memory at 36-months of age even after controlling for SES. 2) Individual differences in SES predicted intersensory processing at 12-months of age. 3) The well-established relation between SES and working memory was partially mediated by intersensory processing. Children from families of higher-SES have better intersensory processing skills at 12-months and this combination of factors predicts greater working memory two years later at 36-months. Together these findings reveal the role of intersensory processing in cognitive functioning.
Affiliation(s)
- Elizabeth V Edgar
- Yale Child Study Center, Yale University School of Medicine, United States
- Bret Eschman
- Department of Psychology, University of Tennessee at Chattanooga, United States
- Kaitlyn Testa
- Department of Psychology, Florida International University, United States
- Bethany Ramirez
- Department of Psychology, Florida International University, United States
- Lorraine E Bahrick
- Department of Psychology, Florida International University, United States
7
Edgar EV, Todd JT, Bahrick LE. Intersensory processing of faces and voices at 6 months predicts language outcomes at 18, 24, and 36 months of age. Infancy 2023; 28:569-596. [PMID: 36760157] [PMCID: PMC10564323] [DOI: 10.1111/infa.12533]
Abstract
Intersensory processing of social events (e.g., matching sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18- and 24-months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3- and 6-months predicts language outcomes at 12-, 18-, 24-, and 36-months, holding traditional predictors constant. Results demonstrate intersensory processing of faces and voices at 6-months (but not 3-months) accounted for significant unique variance in language outcomes at 18-, 24-, and 36-months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6-months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.
8
Lickliter R, Bahrick LE, Vaillant-Mekras J. The role of task difficulty in directing selective attention in bobwhite quail (Colinus virginianus) neonates: A developmental test of the intersensory redundancy hypothesis. Dev Psychobiol 2023; 65:e22381. [PMID: 36946684] [DOI: 10.1002/dev.22381]
Abstract
The dynamics of selective attention necessarily influences the course of early perceptual development. The intersensory redundancy hypothesis proposes that in early development information presented redundantly across two or more senses selectively recruits attention to the amodal properties of an object or event. In contrast, information presented to a single sense enhances attention to modality-specific properties. The present study assessed the second of these predictions in neonatal bobwhite quail (Colinus virginianus), with a focus on the role of task difficulty in directing selective attention. In Experiment 1, we exposed quail chicks to unimodal auditory, nonredundant audiovisual, or redundant audiovisual presentations of a bobwhite maternal call paired with a pulsing light for 10 min/h on the day following hatching. Chicks were subsequently individually tested 24 h later for their unimodal auditory preference between the familiarized maternal call and the same call with pitch altered by two steps. Chicks from all experimental groups preferred the familiarized maternal call over the altered maternal call. In Experiment 2, we repeated the exposure conditions of Experiment 1, but presented a more difficult task by narrowing the pitch range between the two maternal calls during testing. Chicks in the unimodal auditory and nonredundant audiovisual conditions preferred the familiarized call, whereas chicks in the redundant audiovisual exposure group showed no detection of the pitch change. Our results indicate that early discrimination of pitch change is disrupted by intersensory redundancy under difficult but not easy task conditions. These findings, along with findings from human infants, highlight the role of task difficulty in shifting attentional selectivity and underscore the dynamic nature of neonatal attentional salience hierarchies.
Affiliation(s)
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, Florida, USA
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, Florida, USA
9
Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022; 13:997186. [PMID: 36389540] [PMCID: PMC9650530] [DOI: 10.3389/fpsyg.2022.997186]
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue in facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be impacted by the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary, as gazes to the eyes could inform about the communicative nature of the situation or about the labeled object, while gazes to the mouth could improve word processing, all of which are key cues to highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gazes to faces (versus objects) or to different types of faces relate to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
10
Nicklas A, Rückel LM, Noël B, Varga M, Kleinert J, Boss M, Klatt S. Gaze behavior in social interactions between beach volleyball players—An exploratory approach. Front Psychol 2022; 13:945389. [DOI: 10.3389/fpsyg.2022.945389]
Abstract
Previous research has indicated that social interactions and gaze behavior analyses in a group setting could be essential tools in accomplishing group objectives. However, only a few studies have examined the impact of social interactions on group dynamics in team sports and their influence on team performance. This study aimed to investigate the effects of game performance pressure on gaze behavior within social interactions between beach volleyball players during game-like situations. Therefore, 18 expert beach volleyball players completed a high and a low game performance pressure condition while wearing an eye tracking system. The results indicate that higher game performance pressure leads to more frequent and longer fixations on teammates' faces. A greater need for communication without misunderstandings could explain this adaptation. Longer and more frequent looks at the face could improve the reception of verbal and non-verbal information from the teammate's face. Further, players showed inter-individual strategies for coping with high game performance pressure in their gaze behavior, for example, increasing the number of fixations and the fixation duration on the teammate's face. Thereby, this study opens a new avenue for research on social interaction and how it is influenced in and through sport.
11
Ross-Sheehy S, Eschman B, Reynolds EE. Seeing and looking: Evidence for developmental and stimulus-dependent changes in infant scanning efficiency. PLoS One 2022; 17:e0274113. [PMID: 36112722] [PMCID: PMC9481018] [DOI: 10.1371/journal.pone.0274113]
Abstract
Though previous work has examined infant attention across a variety of tasks, less is known about the individual saccades and fixations that make up each bout of attention, and how individual differences in saccade and fixation patterns (i.e., scanning efficiency) change with development, scene content, and perceptual load. To address this, infants between the ages of 5 and 11 months were assessed longitudinally (Experiment 1) and cross-sectionally (Experiment 2). Scanning efficiency (fixation duration, saccade rate, saccade amplitude, and saccade velocity) was assessed while infants viewed six quasi-naturalistic scenes that varied in content (social or non-social) and scene complexity (3, 6, or 9 people/objects). Results from Experiment 1 revealed moderate to strong stability of individual differences in saccade rate, mean fixation duration, and saccade amplitude, and both experiments revealed that 5-month-old infants made larger, faster, and more frequent saccades than older infants. Scanning efficiency was assessed as the relation between fixation duration and saccade amplitude, and results revealed 11-month-olds to have high scanning efficiency across all scenes. However, scanning efficiency also varied with scene content, such that all infants showed higher scanning efficiency when viewing social scenes and more complex scenes. These results suggest both developmental and stimulus-dependent changes in scanning efficiency, and further highlight the use of saccade and fixation metrics as a sensitive indicator of cognitive processing.
Affiliation(s)
- Shannon Ross-Sheehy
- Department of Psychology, University of Tennessee, Knoxville, TN, United States of America
- Bret Eschman
- Department of Psychology, University of Tennessee at Chattanooga, Chattanooga, TN, United States of America
- Esther E. Reynolds
- Department of Psychology, University of Tennessee, Knoxville, TN, United States of America
12
The temporal dynamics of labelling shape infant object recognition. Infant Behav Dev 2022; 67:101698. [DOI: 10.1016/j.infbeh.2022.101698]
13
Franco F, Suttora C, Spinelli M, Kozar I, Fasolo M. Singing to infants matters: Early singing interactions affect musical preferences and facilitate vocabulary building. J Child Lang 2022; 49:552-577. [PMID: 33908341] [DOI: 10.1017/s0305000921000167]
Abstract
This research revealed that the frequency of reported parent-infant singing interactions predicted 6-month-old infants' performance in laboratory music experiments and mediated their language development in the second year. At 6 months, infants (n = 36) were tested using a preferential listening procedure assessing their sustained attention to instrumental and sung versions of the same novel tunes, whilst the parents completed an ad-hoc questionnaire assessing home musical interactions with their infants. Language development was assessed with a follow-up when the infants were 14 months old (n = 26). The main results showed that 6-month-olds preferred listening to sung rather than instrumental melodies, and that self-reported high levels of parental singing with their infants (i) were associated with a less pronounced preference for the sung over the instrumental version of the tunes at 6 months, and (ii) predicted significant advantages on the language outcomes in the second year. The results are interpreted in relation to conceptions of developmental plasticity.
Affiliation(s)
- Fabia Franco
- Department of Psychology, Middlesex University, London, UK
- Chiara Suttora
- Department of Psychology, University of Bologna, Bologna, Italy
- Maria Spinelli
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Iryna Kozar
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Mirco Fasolo
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
14
Bastianello T, Keren-Portnoy T, Majorano M, Vihman M. Infant looking preferences towards dynamic faces: A systematic review. Infant Behav Dev 2022; 67:101709. [PMID: 35338995] [DOI: 10.1016/j.infbeh.2022.101709]
Abstract
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant's attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the infant's development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child's competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
Affiliation(s)
- Marilyn Vihman
- Department of Language and Linguistic Science, University of York, UK
15
From Hemispheric Asymmetry through Sensorimotor Experiences to Cognitive Outcomes in Children with Cerebral Palsy. Symmetry (Basel) 2022. [DOI: 10.3390/sym14020345]
Abstract
Recent neuroimaging studies allowed us to explore abnormal brain structures and interhemispheric connectivity in children with cerebral palsy (CP). Behavioral researchers have long reported that children with CP exhibit suboptimal performance in different cognitive domains (e.g., receptive and expressive language skills, reading, mental imagery, spatial processing, subitizing, math, and executive functions). However, there has been very limited cross-domain research involving these two areas of scientific inquiry. To stimulate such research, this perspective paper proposes some possible neurological mechanisms involved in the cognitive delays and impairments in children with CP. Additionally, the paper examines the ways motor and sensorimotor experience during the development of these neural substrates could enable more optimal development for children with CP. Understanding these developmental mechanisms could guide more effective interventions to promote the development of both sensorimotor and cognitive skills in children with CP.
16
Amico G, Schaefer S. Negative Effects of Embodiment in a Visuo-Spatial Working Memory Task in Children, Young Adults, and Older Adults. Front Psychol 2021; 12:688174. [PMID: 34589020] [PMCID: PMC8473613] [DOI: 10.3389/fpsyg.2021.688174]
Abstract
Studies examining the effect of embodied cognition have shown that linking one's body movements to a cognitive task can enhance performance. The current study investigated whether concurrent walking while encoding or recalling spatial information improves working memory performance, and whether 10-year-old children, young adults, or older adults (M age = 72 years) are affected differently by embodiment. The goal of the Spatial Memory Task was to encode and recall sequences of increasing length by reproducing positions of target fields in the correct order. The nine targets were positioned in a random configuration on a large square carpet (2.5 m × 2.5 m). During encoding and recall, participants either did not move, or they walked into the target fields. In a within-subjects design, all possible combinations of encoding and recall conditions were tested in counterbalanced order. Contrary to our predictions, moving particularly impaired encoding, but also recall. These negative effects were present in all age groups, but older adults' memory was hampered even more strongly by walking during encoding and recall. Our results indicate that embodiment may not help people to memorize spatial information, but can create a dual-task situation instead.
Affiliation(s)
- Gianluca Amico
- Department of Sport Sciences, Saarland University, Saarbrücken, Germany
17
Hearing Better with the Right Eye? The Lateralization of Multisensory Processing Affects Auditory Learning in Northern Bobwhite Quail (Colinus virginianus) Chicks. Appl Anim Behav Sci 2021; 236. [PMID: 33776174 DOI: 10.1016/j.applanim.2021.105274] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 11/21/2022]
Abstract
Precocial avian species exhibit a high degree of lateralization of perceptual and motor abilities, including preferential eye use for tasks such as social recognition and predator detection. Such lateralization has been related, in part, to differential experience prior to hatch. That is, due to spatial and resulting postural constraints late in incubation, one eye and hemisphere (generally the right eye / left hemisphere) receive greater amounts of stimulation than the contralateral eye / hemisphere. This raises the possibility that the left hemisphere may specialize or show relative advantages in integrating information across visual and auditory modalities, given that it typically receives greater amounts of multimodal auditory and visual stimulation prior to hatch. The present study represents an initial investigation of this question in a precocial avian species, the Northern bobwhite quail (Colinus virginianus). Day-old bobwhite chicks received 5-min training sessions in which they vocalized to receive contingent playback of a bobwhite maternal call, presented with or without a light that flashed in synchrony with the notes of the call (i.e., bimodal versus unimodal exposure, respectively). Chicks were trained with or without eye patches that allowed them to experience the visual component of the bimodal stimulus with only the left eye (LE), right eye (RE), or both eyes (i.e., binocular; BIN). Finally, the light was placed in various positions relative to the speakers playing the maternal call across three experiments. Twenty-four hours later, chicks were given a simultaneous choice test between the familiarized call and a novel bobwhite maternal call. Given that the right eye and ear typically face outward and are thus unoccluded by the body during late prenatal development, we hypothesized that RE chicks would show facilitated learning under bimodal conditions compared to all other training conditions. This hypothesis was partially confirmed in Experiment 1, when the light was positioned 40 cm above the source of the maternal call. However, we also observed evidence of suppressed learning in chicks provided BIN exposure to the bimodal audio-visual stimulus that was not present during auditory-only training. Experiments 2 and 3 demonstrated that this was likely related to activation of a left-hemisphere-dependent fear response when the left eye was exposed to a visual stimulus that loomed above the auditory stimulus. These results indicate that multisensory processing is lateralized in a precocial bird and that these species may thus provide a unique model for studying experience-dependent plasticity of intersensory perception.
18
Macari S, Milgramm A, Reed J, Shic F, Powell KK, Macris D, Chawarska K. Context-Specific Dyadic Attention Vulnerabilities During the First Year in Infants Later Developing Autism Spectrum Disorder. J Am Acad Child Adolesc Psychiatry 2021; 60:166-175. [PMID: 32061926 PMCID: PMC9524139 DOI: 10.1016/j.jaac.2019.12.012] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Received: 04/18/2019] [Revised: 12/18/2019] [Accepted: 02/07/2020] [Indexed: 12/27/2022]
Abstract
OBJECTIVE Although some eye-tracking studies demonstrate atypical attention to faces by 6 months of age in autism spectrum disorder (ASD), behavioral studies in early infancy return largely negative results. We examined the effects of context and diagnosis on attention to faces during face-to-face live interactions in infants at high familial risk (HR) and low familial risk (LR) for ASD. METHOD Participants were 6-, 9-, and 12-month-old siblings of children with ASD who were later determined to have ASD (n = 21), other developmental challenges (HR-C; n = 74), or typical development (TD) (HR-TD; n = 32), and low-risk, typically developing controls (LR-TD; n = 49). Infants were administered the social orienting probes task, consisting of five conditions: dyadic bid, song, peek-a-boo, tickle, and toy play. Attention to an unfamiliar examiner's face was coded by blinded raters from video recordings. RESULTS At all ages, the ASD group spent less time looking at the examiner's face than the HR-C, HR-TD, and LR-TD groups during the Dyadic Bid and Tickle conditions (all p <.05), but not during the Song, Peek-a-Boo, or Toy Play conditions (all p >.23). Lower attention to faces during Dyadic Bid and Tickle conditions was significantly correlated with higher severity of autism symptoms at 18 months. CONCLUSION During the prodromal stages of the disorder, infants with ASD exhibited subtle impairments in attention to faces of interactive partners during interactions involving eye contact and child-directed speech (with and without physical contact), but not in contexts involving singing, familiar anticipatory games, or toy play. Considering the convergence with eye-tracking findings on limited attention to faces in infants later diagnosed with ASD, reduced attention to faces of interactive partners in specific contexts may constitute a promising candidate behavioral marker of ASD in infancy.
Affiliation(s)
- Suzanne Macari
- Yale Child Study Center, Yale School of Medicine, New Haven, Connecticut.
19
Cowan N, AuBuchon AM, Gilchrist AL, Blume CL, Boone AP, Saults JS. Developmental change in the nature of attention allocation in a dual task. Dev Psychol 2021; 57:33-46. [PMID: 33271032 PMCID: PMC7959247 DOI: 10.1037/dev0001134] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 12/11/2022]
Abstract
Younger children have more difficulty in sharing attention between two concurrent tasks than do older participants, but in addition to this developmental change, we documented changes in the nature of attention sharing. We studied children 6-8 and 10-14 years old and college students (in all, 104 women and 76 men; 3% Hispanic, 3% Black or African American, 3% Asian, 7% multiracial, and 84% White). On each dual-task trial, the participant received an array of colored squares to be retained for a subsequent probe recognition test and then an easy or more difficult signal requiring a quick response (a speeded task, clicking a key on the same side of the screen as the signal or the opposite side). Finally, each trial ended with the presentation of the array item recognition probe and the participant's response to it. In our youngest age group (6-8 years), array memory was often displaced by the speeded task performed under load, especially when it was the opposite-side task, but speeded-task accuracies were unaffected by the presence of an array memory load. In contrast, in older participants (10-14 years and college students), the memory load was maintained better, with some cost to the speeded task. With maturity, participants were better able to adopt a proactive stance in which not only present processing demands but also upcoming demands were taken into account, allowing them to balance the demands of the two tasks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Affiliation(s)
- Nelson Cowan
- Department of Psychological Sciences, University of Missouri
- J Scott Saults
- Department of Psychological Sciences, University of Missouri
20
Abstract
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other's mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention, in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary to certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
Affiliation(s)
- Lucas Battich
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany.
- Graduate School of Systemic Neurosciences, Ludwig Maximilian University Munich, Munich, Germany.
- Merle Fairhurst
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institut für Psychologie, Fakultät für Humanwissenschaften, Universität der Bundeswehr München, Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London, UK
21
Moore DS, Johnson SP. The development of mental rotation ability across the first year after birth. Adv Child Dev Behav 2020; 58:1-33. [PMID: 32169193 DOI: 10.1016/bs.acdb.2020.01.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/28/2022]
Abstract
Mental rotation (MR) is the ability to imagine the appearance of an object from a different perspective. This ability is involved in many human cognitive and behavioral activities. We discuss studies that have examined MR in infants and its development across the first year after birth. Despite some conflicting findings across these studies, several conclusions can be reached. First, MR may be available to human infants as young as 3 months of age. Second, MR processes in infancy may be similar or identical to MR processes later in life. Third, there may be sex differences in MR performance, in general favoring males. Fourth, there appear to be multiple influences on infants' MR performance, including infants' motor activity, stimulus complexity, hormones, and parental attitudes. We conclude by calling for additional research to examine more carefully the causes and consequences of MR abilities early in life.
Affiliation(s)
- David S Moore
- Pitzer College and Claremont Graduate University, Claremont, CA, United States.
22
Johnson SP, Moore DS. Spatial Thinking in Infancy: Origins and Development of Mental Rotation Between 3 and 10 Months of Age. Cogn Res Princ Implic 2020; 5:10. [PMID: 32124099 PMCID: PMC7052106 DOI: 10.1186/s41235-020-00212-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 10/01/2019] [Accepted: 02/11/2020] [Indexed: 11/11/2022] Open
Abstract
Mental rotation (MR) is the ability to transform a mental representation of an object so as to accurately predict how the object would look from a different angle (Sci 171:701-703, 1971), and it is involved in a number of important cognitive and behavioral activities. In this review we discuss recent studies that have examined MR in infants and the development of MR across the first year after birth. These studies have produced many conflicting results, yet several tentative conclusions can be reached. First, MR may be operational in infants as young as 3 months of age. Second, there may be sex differences in MR performance in infancy, in general favoring males, as there are in children and in adults. Third, there appear to be multiple influences on infants' MR performance, including infants' motor activity, stimulus or task complexity, hormones, and parental attitudes. We conclude by calling for additional research to examine more carefully the causes and consequences of MR abilities early in life.
Affiliation(s)
- David S. Moore
- Pitzer College and Claremont Graduate University, 1050 N. Mills Avenue, Claremont, CA 91711 USA
23
Bahrick LE, McNew ME, Pruden SM, Castellanos I. Intersensory redundancy promotes infant detection of prosody in infant-directed speech. J Exp Child Psychol 2019; 183:295-309. [PMID: 30954804 PMCID: PMC6980335 DOI: 10.1016/j.jecp.2019.02.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 05/25/2018] [Revised: 02/13/2019] [Accepted: 02/14/2019] [Indexed: 10/27/2022]
Abstract
Prosody, or the intonation contours of speech, conveys emotion and intention to the listener and provides infants with an early basis for detecting meaning in speech. Infant-directed speech (IDS) is characterized by exaggerated prosody, slower tempo, and elongated pauses, all amodal properties detectable across the face and voice. Although speech is an audiovisual event, it has been studied primarily as a unimodal auditory stream without the synchronized dynamic face of the speaker. According to the intersensory redundancy hypothesis, redundancy across the senses facilitates perceptual learning of amodal information, including prosody. We predicted that young infants who are still learning to discriminate and categorize prosodic information would detect prosodic changes better in the presence of intersensory redundancy (i.e., synchronous audiovisual speech) than in its absence (i.e., unimodal auditory or asynchronous audiovisual speech). To test this hypothesis, 72 4-month-old infants were habituated to recordings of women reciting passages in IDS with prosody conveying either approval or prohibition and then were tested with recordings of a novel passage with either a change or no change in prosody. Infants who received bimodal synchronous stimulation exhibited significant visual recovery to the novel passage with a change in prosody, but not to a novel passage with no change in prosody. Infants in the unimodal auditory and bimodal asynchronous conditions did not exhibit visual recovery in either condition. Results support the hypothesis that intersensory redundancy facilitates detection and abstraction of invariant prosody across changes in linguistic content and likely serves as an early foundation for the detection of meaning in fluent speech.
Affiliation(s)
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, FL 33199, USA.
- Myriah E McNew
- Department of Psychology, Florida International University, Miami, FL 33199, USA.
- Shannon M Pruden
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- Irina Castellanos
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH 43212, USA
24
Fagan MK. Exploring in Silence: Hearing and Deaf Infants Explore Objects Differently before Cochlear Implantation. Infancy 2019; 24:338-355. [PMID: 31768147 PMCID: PMC6876862 DOI: 10.1111/infa.12281] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Received: 12/06/2017] [Accepted: 12/02/2018] [Indexed: 11/29/2022]
Abstract
Infant development has rarely been informed by the behavior of infants with sensory differences despite increasing recognition that infant behavior itself creates sensory learning opportunities. The purpose of this study of object exploration was to compare the behavior of hearing and deaf infants, with and without cochlear implants, in order to identify the effects of profound sensorineural hearing loss on infant exploration before cochlear implantation, the behavioral effects of access to auditory feedback after cochlear implantation, and the sensory motivation for exploration behaviors performed by hearing infants as well. The results showed that 9-month-old deaf infants explored objects as often as hearing infants but used systematically different approaches, showing less variation before than after cochlear implantation. Potential associations between these early experiences and later learning are discussed in the context of embodied developmental theory, comparative studies, and research with adults. The data call for increased recognition of the active sensorimotor nature of infant learning and future research that investigates differences in sensorimotor experience as potential mechanisms in later learning and sequential memory development.
Affiliation(s)
- Mary K Fagan
- Department of Communication Sciences and Disorders, Chapman University
25
Rigato S, Banissy MJ, Romanska A, Thomas R, van Velzen J, Bremner AJ. Cortical signatures of vicarious tactile experience in four-month-old infants. Dev Cogn Neurosci 2019; 35:75-80. [PMID: 28942240 PMCID: PMC6968956 DOI: 10.1016/j.dcn.2017.09.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Received: 03/29/2017] [Revised: 06/30/2017] [Accepted: 09/11/2017] [Indexed: 11/29/2022] Open
Abstract
The human brain recruits similar brain regions when a state is experienced (e.g., touch, pain, actions) and when that state is passively observed in other individuals. In adults, seeing other people being touched activates similar brain areas as when we experience touch ourselves. Here we show that already by four months of age, cortical responses to tactile stimulation are modulated by visual information specifying another person being touched. We recorded somatosensory evoked potentials (SEPs) in 4-month-old infants while they were presented with brief vibrotactile stimuli to the hands. At the same time that the tactile stimuli were presented the infants observed another person's hand being touched by a soft paintbrush or approached by the paintbrush which then touched the surface next to their hand. A prominent positive peak in SEPs contralateral to the site of tactile stimulation around 130 ms after the tactile stimulus onset was of a significantly larger amplitude for the "Surface" trials than for the "Hand" trials. These findings indicate that, even at four months of age, somatosensory cortex is not only involved in the personal experience of touch but can also be vicariously recruited by seeing other people being touched.
Affiliation(s)
- Silvia Rigato
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Michael J Banissy
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Aleksandra Romanska
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Rhiannon Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- José van Velzen
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Andrew J Bremner
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK.
26
Curtindale LM, Bahrick LE, Lickliter R, Colombo J. Effects of multimodal synchrony on infant attention and heart rate during events with social and nonsocial stimuli. J Exp Child Psychol 2019; 178:283-294. [PMID: 30445204 PMCID: PMC6980371 DOI: 10.1016/j.jecp.2018.10.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Received: 05/23/2018] [Revised: 09/16/2018] [Accepted: 10/16/2018] [Indexed: 11/25/2022]
Abstract
Attention is a state of readiness or alertness, associated with behavioral and psychophysiological responses, that facilitates learning and memory. Multisensory and dynamic events have been shown to elicit more attention and produce greater sustained attention in infants than auditory or visual events alone. Such redundant and often temporally synchronous information guides selectivity and facilitates perception, learning, and memory of properties of events specified by redundancy. In addition, events involving faces or other social stimuli provide an extraordinary amount of redundant information that attracts and sustains attention. In the current study, 4- and 8-month-old infants were shown 2-min multimodal videos featuring social or nonsocial stimuli to determine the relative roles of synchrony and stimulus category in inducing attention. Behavioral measures included average looking time and peak look duration, and convergent measurement of heart rate (HR) allowed for the calculation of HR-defined phases of attention: Orienting (OR), sustained attention (SA), and attention termination (AT). The synchronous condition produced an earlier onset of SA (less time in OR) and a deeper state of SA than the asynchronous condition. Social stimuli attracted and held attention (longer duration of peak looks and lower HR than nonsocial stimuli). Effects of synchrony and the social nature of stimuli were additive, suggesting independence of their influence on attention. These findings are the first to demonstrate different HR-defined phases of attention as a function of intersensory redundancy, suggesting greater salience and deeper processing of naturalistic synchronous audiovisual events compared with asynchronous ones.
Affiliation(s)
- Lori M Curtindale
- Department of Psychology, East Carolina University, Greenville, NC 27858, USA.
- Lorraine E Bahrick
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL 33199, USA
- John Colombo
- Department of Psychology, University of Kansas, Lawrence, KS 66045, USA
27
Bahrick LE, Soska KC, Todd JT. Assessing individual differences in the speed and accuracy of intersensory processing in young children: The intersensory processing efficiency protocol. Dev Psychol 2018; 54:2226-2239. [PMID: 30346188 PMCID: PMC6261800 DOI: 10.1037/dev0000575] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Indexed: 11/08/2022]
Abstract
Detecting intersensory redundancy guides cognitive, social, and language development. Yet, researchers lack fine-grained, individual difference measures needed for studying how early intersensory skills lead to later outcomes. The intersensory processing efficiency protocol (IPEP) addresses this need. Across a number of brief trials, participants must find a sound-synchronized visual target event (social, nonsocial) amid five visual distractor events, simulating the "noisiness" of natural environments. Sixty-four 3- to 5-year-old children were tested using remote eye-tracking. Children showed intersensory processing by attending to the sound-synchronous event more frequently and longer than in a silent visual control, and more frequently than expected by chance. The IPEP provides a fine-grained, nonverbal method for characterizing individual differences in intersensory processing appropriate for infants and children. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Affiliation(s)
- Kasey C Soska
- Department of Psychology, Florida International University
28
Bahrick LE, Todd JT, Soska KC. The Multisensory Attention Assessment Protocol (MAAP): Characterizing individual differences in multisensory attention skills in infants and children and relations with language and cognition. Dev Psychol 2018; 54:2207-2225. [PMID: 30359058 PMCID: PMC6263835 DOI: 10.1037/dev0000594] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Indexed: 11/08/2022]
Abstract
Multisensory attention skills provide a crucial foundation for early cognitive, social, and language development, yet there are no fine-grained, individual difference measures of these skills appropriate for preverbal children. The Multisensory Attention Assessment Protocol (MAAP) fills this need. In a single video-based protocol requiring no language skills, the MAAP assesses individual differences in three fundamental building blocks of attention to multisensory events-the duration of attention maintenance, the accuracy of intersensory (audiovisual) matching, and the speed of shifting-for both social and nonsocial events, in the context of high and low competing visual stimulation. In Experiment 1, 2- to 5-year-old children (N = 36) received the MAAP and assessments of language and cognitive functioning. In Experiment 2 the procedure was streamlined and presented to 12-month-olds (N = 48). Both infants and children showed high levels of attention maintenance to social and nonsocial events, impaired attention maintenance and speed of shifting when competing stimulation was high, and significant intersensory matching. Children showed longer maintenance, faster shifting, and less impairment from competing stimulation than infants. In 2- to 5-year-old children, duration and accuracy were intercorrelated, showed increases with age, and predicted cognitive and language functioning. The MAAP opens the door to assessing developmental pathways between early attention patterns to audiovisual events and language, cognitive, and social development. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
29
Flom R, Bahrick LE, Pick AD. Infants Discriminate the Affective Expressions of their Peers: The Roles of Age and Familiarization Time. Infancy 2018; 23:692-707. [PMID: 30271279 DOI: 10.1111/infa.12246] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 12/01/2022]
Abstract
Research examining infants' discrimination of affect often uses unfamiliar faces and voices of adults. Recently, research has examined infant discrimination of affect in familiar faces and voices. In much of this research, infants were habituated to the affective expressions using a "standard" 50% habituation criterion. We extend this line of research by examining infants' discrimination of the dynamic facial and vocal affective expressions of unfamiliar peers (i.e., 4-month-olds) and by assessing how discrimination is affected by changing the habituation criterion. In two experiments, using an infant-controlled habituation design, we explored 3- and 5-month-olds' discrimination of their peers' dynamic audiovisual displays of positive and negative expressions of affect. Results of Experiment 1, using a 50% habituation criterion, revealed that 5-month-olds, but not 3-month-olds, discriminated the affective expressions of their peers. In Experiment 2, we examined whether 3-month-olds' lack of discrimination in Experiment 1 was a result of insufficient habituation (i.e., familiarization). Specifically, 3-month-olds were habituated using a 70% habituation criterion, providing them with longer familiarization time. Results revealed that, using the more stringent habituation criterion, 3-month-olds showed longer habituation times (i.e., increased familiarization) and discriminated their peers' affective expressions. Results are discussed in terms of infants' discrimination of affect, the role of familiarization time, and limitations of the 50% habituation criterion.
Affiliation(s)
- Ross Flom
- Department of Psychology, Southern Utah University
- Anne D Pick
- Institute of Child Development, University of Minnesota
30
Buss AT, Ross-Sheehy S, Reynolds GD. Visual working memory in early development: a developmental cognitive neuroscience perspective. J Neurophysiol 2018; 120:1472-1483. [PMID: 29897858 DOI: 10.1152/jn.00087.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Indexed: 11/22/2022] Open
Abstract
In this article, we review the literature on the development of visual working memory (VWM). We focus on two major periods of development, infancy and early childhood. First, we discuss the innovative methods that have been devised to understand how the development of selective attention and perception provide the foundation of VWM abilities. We detail the behavioral and neural data associated with the development of VWM during infancy. Next, we discuss various signatures of development in VWM during early childhood in the context of spatial and featural memory processes. We focus on the developmental transition to more adult-like VWM properties. Finally, we discuss computational frameworks that have explained the complex patterns of behavior observed in VWM tasks from infancy to adulthood and attempt to explain links between measures of infant VWM and childhood VWM.
Affiliation(s)
- Aaron T Buss
- Department of Psychology, University of Tennessee, Knoxville, Tennessee
- Greg D Reynolds
- Department of Psychology, University of Tennessee, Knoxville, Tennessee
31
Dibavar MR. Infants' intermodal numerical knowledge. Infant Behav Dev 2018; 52:32-44. [PMID: 29807236] [DOI: 10.1016/j.infbeh.2018.04.006]
Abstract
Two-system theory, the dominant approach in the field of infant numerical representation, is characterized by three features: precise representation of small sets of objects, approximate representation of large magnitudes, and failure to compare small and large sets. Comparison of single- and multimodal numerical abilities suggests that infants' performance in multimodal conditions is consistent with these three features. Nevertheless, multimodal stimulation influences infants' numerical representation in two ways: it prevents the formation of perceptual overlaps across different sensory modalities, which can lead to an understanding of the numerical values of small sets, and it creates a conceptual overlap about number that increases infants' accuracy in discriminating quantities when numerical information is presented bimodally and synchronously. Such multisensory benefits provide numerical capabilities beyond what is depicted by the two-system view.
32
Reynolds GD, Roth KC. The Development of Attentional Biases for Faces in Infancy: A Developmental Systems Perspective. Front Psychol 2018; 9:222. [PMID: 29541043] [PMCID: PMC5835799] [DOI: 10.3389/fpsyg.2018.00222]
Abstract
We present an integrative review of research and theory on major factors involved in the early development of attentional biases to faces. Research utilizing behavioral, eye-tracking, and neuroscience measures with infant participants, as well as comparative research with animal subjects, is reviewed. We begin with coverage of research demonstrating the presence of an attentional bias for faces shortly after birth, such as newborn infants' visual preference for face-like over non-face stimuli. The role of experience and the process of perceptual narrowing in face processing are examined as infants begin to demonstrate enhanced behavioral and neural responsiveness to mother over stranger, female over male, own- over other-race, and native over non-native faces. Next, we cover research on developmental change in infants' neural responsiveness to faces in multimodal contexts, such as audiovisual speech. We also explore the potential influence of arousal and attention on early perceptual preferences for faces. Lastly, the potential influence of the development of attention systems in the brain on social-cognitive processing is discussed. In conclusion, we interpret the findings under the framework of Developmental Systems Theory, emphasizing the combined and distributed influence of several factors, both internal (e.g., arousal, neural development) and external (e.g., early social experience) to the developing child, in the emergence of attentional biases that lead to enhanced responsiveness and processing of faces commonly encountered in the native environment.
Affiliation(s)
- Greg D. Reynolds
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, United States
33
Szokolszky A, Read C. Developmental Ecological Psychology and a Coalition of Ecological–Relational Developmental Approaches. Ecological Psychology 2018. [DOI: 10.1080/10407413.2018.1410409]
Affiliation(s)
- Catherine Read
- Department of Plant Biology, Rutgers University
- Department of Psychology, Ithaca College
34
Lickliter R, Bahrick LE, Vaillant-Mekras J. The intersensory redundancy hypothesis: Extending the principle of unimodal facilitation to prenatal development. Dev Psychobiol 2017; 59:910-915. [PMID: 28833041] [PMCID: PMC5630509] [DOI: 10.1002/dev.21551]
Abstract
Selective attention to different properties of stimulation provides the foundation for perception, learning, and memory. The Intersensory Redundancy Hypothesis (IRH) proposes that early in development information presented redundantly across two or more modalities (multimodal) selectively recruits attention to and enhances perceptual learning of amodal properties, whereas information presented to a single sense modality (unimodal) enhances perceptual learning of modality-specific properties. The present study is the first to assess this principle of unimodal facilitation in non-human animals in prenatal development. We assessed bobwhite quail embryos' prenatal detection of pitch, a modality-specific property, under conditions of unimodal and bimodal (synchronous or asynchronous) exposure. Chicks exposed to prenatal unimodal auditory stimulation or asynchronous bimodal (audiovisual) stimulation preferred the familiarized maternal call over a novel pitch-modified maternal call following hatching, whereas chicks exposed to redundant (synchronous) audiovisual stimulation failed to prefer the familiar call over the pitch-modified call. These results provide further evidence that selective attention is recruited to specific stimulus properties of events in early development and that these biases are evident even during the prenatal period.
35
Lew-Williams C, Ferguson B, Abu-Zhaya R, Seidl A. Social touch interacts with infants' learning of auditory patterns. Dev Cogn Neurosci 2017; 35:66-74. [PMID: 29051028] [PMCID: PMC5876072] [DOI: 10.1016/j.dcn.2017.09.006]
Abstract
- Infants use intersensory redundancy provided by social touch to learn auditory patterns.
- There is wide variation in the frequency of different patterns of touch from caregivers.
- Less frequent patterns of touch may be more likely to enhance attention and learning.
- The findings suggest that infants track patterns of touch in naturalistic input from caregivers.
Infants’ experiences are defined by the presence of concurrent streams of perceptual information in social environments. Touch from caregivers is an especially pervasive feature of early development. Using three lab experiments and a corpus of naturalistic caregiver-infant interactions, we examined the relevance of touch in supporting infants’ learning of structure in an altogether different modality: audition. In each experiment, infants listened to sequences of sine-wave tones following the same abstract pattern (e.g., ABA or ABB) while receiving time-locked touch sequences from an experimenter that provided either informative or uninformative cues to the pattern (e.g., knee-elbow-knee or knee-elbow-elbow). Results showed that intersensorily redundant touch supported infants’ learning of tone patterns, but learning varied depending on the typicality of touch sequences in infants’ lives. These findings suggest that infants track touch sequences from moment to moment and in aggregate from their caregivers, and use the intersensory redundancy provided by touch to discover patterns in their environment.
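The design above hinges on whether a time-locked touch sequence instantiates the same abstract pattern (e.g., ABA vs. ABB) as the tone sequence. That check can be sketched as follows; this is a hypothetical illustration, not the authors' analysis code, and the function names are assumptions:

```python
def abstract_pattern(seq):
    # Map a concrete sequence to its abstract repetition pattern,
    # e.g. ('knee', 'elbow', 'knee') -> 'ABA'.
    labels = {}
    out = []
    for item in seq:
        if item not in labels:
            labels[item] = chr(ord('A') + len(labels))
        out.append(labels[item])
    return ''.join(out)

def touch_is_informative(tone_seq, touch_seq):
    # In the study's terms, touch is an informative (intersensorily
    # redundant) cue when it follows the same abstract pattern as the
    # tones, e.g. knee-elbow-knee paired with an ABA tone triplet.
    return abstract_pattern(tone_seq) == abstract_pattern(touch_seq)
```

For example, a knee-elbow-elbow touch sequence is informative about an ABB tone triplet but uninformative about an ABA one.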
Affiliation(s)
- Rana Abu-Zhaya
- Department of Speech, Language, and Hearing Sciences, Purdue University, USA
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University, USA
36
Thomas RL, Misra R, Akkunt E, Ho C, Spence C, Bremner AJ. Sensitivity to auditory-tactile colocation in early infancy. Dev Sci 2017; 21:e12597. [PMID: 28880496] [DOI: 10.1111/desc.12597]
Abstract
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment "at hand", we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6- (n = 14; Experiment 1) and 4-month-old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants' visual preferences for spatially congruent and incongruent auditory-tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4- and 6-month-old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age-wise shift in infants' preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory-tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.
Affiliation(s)
- Rhiannon L Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
- Reeva Misra
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Emine Akkunt
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
- Cristy Ho
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Andrew J Bremner
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
37
Herten N, Otto T, Wolf OT. The role of eye fixation in memory enhancement under stress – An eye tracking study. Neurobiol Learn Mem 2017; 140:134-144. [DOI: 10.1016/j.nlm.2017.02.016]
38
Patten E, Labban JD, Casenhiser DM, Cotton CL. Synchrony Detection of Linguistic Stimuli in the Presence of Faces: Neuropsychological Implications for Language Development in ASD. Dev Neuropsychol 2017; 41:362-374. [PMID: 28059555] [DOI: 10.1080/87565641.2016.1243113]
Abstract
Children with autism spectrum disorders (ASD) may be impaired in their ability to detect audiovisual synchrony and their ability may be influenced by the nature of the stimuli. We investigated the possibility that synchrony detection is disrupted by the presence of human faces by testing children with ASD using a preferential looking language-based paradigm. Children with low language abilities were significantly worse at detecting synchrony when the stimuli include an unobscured face than when the face was obscured. Findings suggest that the presence of faces may make multisensory processing more difficult. Implications for interventions are discussed, particularly those targeting attention to faces.
Affiliation(s)
- Elena Patten
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville, Tennessee
- Jeffrey D Labban
- Department of Kinesiology, University of North Carolina at Greensboro, Greensboro, North Carolina
- Devin M Casenhiser
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville, Tennessee
- Catherine L Cotton
- Department of Communication Sciences & Disorders, University of North Carolina at Greensboro, Greensboro, North Carolina
39
Shinskey JL. Sound effects: Multimodal input helps infants find displaced objects. Br J Dev Psychol 2016; 35:317-333. [PMID: 27868211] [DOI: 10.1111/bjdp.12165]
Abstract
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition.
Statement of contribution
What is already known on this subject: Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input.
What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year.
Affiliation(s)
- Jeanne L Shinskey
- Royal Holloway, University of London, UK; University of South Carolina, Columbia, South Carolina, USA
40
Bahrick LE, Todd JT, Castellanos I, Sorondo BM. Enhanced attention to speaking faces versus other event types emerges gradually across infancy. Dev Psychol 2016; 52:1705-1720. [PMID: 27786526] [PMCID: PMC5291072] [DOI: 10.1037/dev0000157]
Abstract
The development of attention to dynamic faces versus objects providing synchronous audiovisual versus silent visual stimulation was assessed in a large sample of infants. Maintaining attention to the faces and voices of people speaking is critical for perceptual, cognitive, social, and language development. However, no studies have systematically assessed when, if, or how attention to speaking faces emerges and changes across infancy. Two measures of attention maintenance, habituation time (HT) and look-away rate (LAR), were derived from cross-sectional data of 2- to 8-month-old infants (N = 801). Results indicated that attention to audiovisual faces and voices was maintained across age, whereas attention to each of the other event types (audiovisual objects, silent dynamic faces, silent dynamic objects) declined across age. This reveals a gradually emerging advantage in attention maintenance (longer HTs, lower LARs) for audiovisual speaking faces compared with the other 3 event types. At 2 months, infants showed no attentional advantage for faces (with greater attention to audiovisual than to visual events); at 3 months, they attended more to dynamic faces than objects (in the presence or absence of voices), and by 4 to 5 and 6 to 8 months, significantly greater attention emerged to temporally coordinated faces and voices of people speaking compared with all other event types. Our results indicate that selective attention to coordinated faces and voices over other event types emerges gradually across infancy, likely as a function of experience with multimodal, redundant stimulation from person and object events.
Affiliation(s)
- Irina Castellanos
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, Columbus, OH
- Barbara M. Sorondo
- Florida International University Libraries, Florida International University, Miami, FL
41
Kretch KS, Adolph KE. The organization of exploratory behaviors in infant locomotor planning. Dev Sci 2016; 20. [PMID: 27147103] [DOI: 10.1111/desc.12421]
Abstract
How do infants plan and guide locomotion under challenging conditions? This experiment investigated the real-time process of visual and haptic exploration in 14-month-old infants as they decided whether and how to walk over challenging terrain - a series of bridges varying in width. Infants' direction of gaze was recorded with a head-mounted eye tracker and their haptic exploration and locomotor actions were captured on video. Infants' exploration was an organized, efficient sequence of visual, haptic, and locomotor behaviors. They used visual exploration from a distance as an initial assessment on nearly every bridge. Visual information subsequently prompted gait modifications while approaching narrow bridges and haptic exploration at the edge of the bridge. Results confirm predictions about the sequential, ramping-up process of exploration and the distinct roles of vision and touch. Exploration, however, was not a guarantee of adaptive decisions. With walking experience, exploratory behaviors became increasingly efficient and infants were better able to interpret the resulting perceptual information in terms of whether it was safe to walk.
42
Zmigrod L, Zmigrod S. On the Temporal Precision of Thought: Individual Differences in the Multisensory Temporal Binding Window Predict Performance on Verbal and Nonverbal Problem Solving Tasks. Multisens Res 2016. [DOI: 10.1163/22134808-00002532]
Abstract
Although psychology is greatly preoccupied by the tight link between the way that individuals perceive the world and their intelligent, creative behavior, there is little experimental work on the relationship between individual differences in perception and cognitive ability in healthy populations. Here, individual differences in problem solving ability were examined in relation to multisensory perception as measured by tolerance for temporal asynchrony between auditory and visual inputs, i.e., the multisensory temporal binding window. The results demonstrated that enhanced performance in both verbal and nonverbal problem solving tasks (the Remote Associates Test and Raven’s Advanced Progressive Matrices Task) is predicted by a narrower audio-visual temporal binding window, which reflects greater sensitivity to subtle discrepancies in sensory inputs. This suggests that the precision of individuals’ temporal window of multisensory integration might mirror their capacities for complex reasoning and thus the precision of their thoughts.
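The "multisensory temporal binding window" above is the range of audio-visual asynchronies an individual still tolerates as simultaneous. A coarse sketch of how its width could be estimated from simultaneity-judgment data follows; this is a hypothetical illustration (the function name and the fixed-threshold rule are assumptions; published studies typically fit psychometric functions, e.g. Gaussians, rather than thresholding raw proportions):

```python
def binding_window_width(soas_ms, p_simultaneous, threshold=0.75):
    """Estimate the temporal binding window width in milliseconds.

    soas_ms        : sorted stimulus-onset asynchronies (negative =
                     audio leads, positive = visual leads).
    p_simultaneous : proportion of 'simultaneous' judgments per SOA.
    The window is taken as the span of SOAs at which judgments of
    simultaneity stay at or above `threshold`.
    """
    inside = [soa for soa, p in zip(soas_ms, p_simultaneous)
              if p >= threshold]
    if not inside:
        return 0.0
    return float(max(inside) - min(inside))
```

On this measure, a narrower window (smaller returned width) corresponds to the greater sensitivity to audio-visual discrepancies that the study links to better problem-solving performance.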
Affiliation(s)
- Leor Zmigrod
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sharon Zmigrod
- Institute for Psychological Research & Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
43
Bahrick LE, Lickliter R, Castellanos I, Todd JT. Intrasensory Redundancy Facilitates Infant Detection of Tempo: Extending Predictions of the Intersensory Redundancy Hypothesis. Infancy 2015; 20:377-404. [PMID: 26207101] [PMCID: PMC4508026] [DOI: 10.1111/infa.12081]
Abstract
Research has demonstrated that intersensory redundancy (stimulation synchronized across multiple senses) is highly salient and facilitates processing of amodal properties in multimodal events, bootstrapping early perceptual development. The present study is the first to extend this central principle of the intersensory redundancy hypothesis (IRH) to certain types of intrasensory redundancy (stimulation synchronized within a single sense). Infants were habituated to videos of a toy hammer tapping silently (unimodal control), depicting intersensory redundancy (synchronized with a soundtrack) or intrasensory redundancy (synchronized with another visual event; light flashing or bat tapping). In Experiment 1, 2-month-olds showed both intersensory and intrasensory facilitation (with respect to the unimodal control) for detecting a change in tempo. However, intrasensory facilitation was found when the hammer was synchronized with the light flashing (different motion) but not with the bat tapping (same motion). Experiment 2 tested 3-month-olds using a somewhat easier tempo contrast. Results supported a similarity hypothesis: intrasensory redundancy between two dissimilar events was more effective than that between two similar events for promoting processing of amodal properties. These findings extend the IRH and indicate that in addition to intersensory redundancy, intrasensory redundancy between two synchronized dissimilar visual events is also effective in promoting perceptual processing of amodal event properties.
Affiliation(s)
| | - Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL
| | - Irina Castellanos
- Department of Otolaryngology Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN
| | | |
44
Intersensory redundancy promotes visual rhythm discrimination in visually impaired infants. Infant Behav Dev 2015; 39:92-7. [PMID: 25827259] [DOI: 10.1016/j.infbeh.2015.02.012]
Abstract
Infants' attention is captured by the redundancy of amodal stimulation in multimodal objects and events. Evidence from this study demonstrates that intersensory redundancy can facilitate discrimination of rhythm changes presented in the visual modality alone in visually impaired infants, suggesting that multisensory rehabilitation strategies could prove helpful in this population.