1. Bothe R, Eiteljoerge S, Trouillet L, Elsner B, Mani N. Better in sync: Temporal dynamics explain multisensory word-action-object learning in early development. Infancy 2024;29:482-509. [PMID: 38520389] [DOI: 10.1111/infa.12590]
Abstract
We investigated the temporal impact of multisensory settings on children's learning of word-object and action-object associations at 1 and 2 years of age. Specifically, we examined whether the temporal alignment of words and actions influenced the acquisition of novel word-action-object associations. We used a preferential-looking and violation-of-expectation task in which infants and young children were first presented with two distinct word-object and action-object pairings, either synchronously (overlapping in time) or sequentially (one after the other). Findings revealed that 2-year-olds recognized both action-object and word-object associations when they first saw the word-action-object combinations synchronously, but not sequentially, as evidenced by their looking behavior. One-year-olds showed no evidence of recognizing either the word-object or the action-object pairs, regardless of the initial temporal alignment of these cues. To control for individual differences, we explored factors that might influence associative learning based on parental reports of 1- and 2-year-olds' development; however, developmental measures did not explain word-action-object associative learning in either group. We discuss that while young children may benefit from the temporal alignment of multisensory cues, as it enables them to actively engage with the multisensory content in real time, infants may have been overwhelmed by the complexity of this input.
Affiliation(s)
- Ricarda Bothe: Psychology of Language, Georg-August University Goettingen, Goettingen, Germany; Leibniz ScienceCampus "Primate Cognition", Goettingen, Germany
- Sarah Eiteljoerge: Psychology of Language, Georg-August University Goettingen, Goettingen, Germany; Leibniz ScienceCampus "Primate Cognition", Goettingen, Germany
- Leonie Trouillet: Developmental Psychology, University of Potsdam, Potsdam, Germany
- Birgit Elsner: Developmental Psychology, University of Potsdam, Potsdam, Germany
- Nivedita Mani: Psychology of Language, Georg-August University Goettingen, Goettingen, Germany; Leibniz ScienceCampus "Primate Cognition", Goettingen, Germany

2. Schroer SE, Yu C. Word learning is hands-on: Insights from studying natural behavior. Adv Child Dev Behav 2024;66:55-79. [PMID: 39074925] [DOI: 10.1016/bs.acdb.2024.04.002]
Abstract
Infants' interactions with social partners are richly multimodal. Dyads respond to and coordinate their visual attention, gestures, vocalizations, speech, manual actions, and manipulations of objects. Although infants are typically described as active learners, previous experimental research has often focused on how infants learn from stimuli that are well crafted by researchers. Recent research studying naturalistic, free-flowing interactions has explored the meaningful patterns in dyadic behavior that relate to language learning. Infants' manual engagement with and exploration of objects support their visual attention, create salient and diverse views of objects, and elicit labeling utterances from parents. In this chapter, we discuss how the cascade of behaviors created by infant multimodal attention plays a fundamental role in shaping their learning environment, supporting real-time word learning and predicting later vocabulary size. We draw from recent at-home and cross-cultural research to test the validity of our mechanistic pathway and discuss why hands matter so much for learning. Our goal is to convey the critical need for developmental scientists to study natural behavior and move beyond our "tried-and-true" paradigms, like screen-based tasks. By studying natural behavior, the role of infants' hands in early language learning was revealed, though it was a behavior that was often uncoded, undiscussed, or not even allowed in decades of previous research. When we study infants in their natural environment, they can show us how they learn about and explore their world. Word learning is hands-on.
Affiliation(s)
- Sara E Schroer: The Center for Perceptual Systems, The University of Texas at Austin; Department of Psychology, The University of Texas at Austin
- Chen Yu: The Center for Perceptual Systems, The University of Texas at Austin; Department of Psychology, The University of Texas at Austin

3. Seidl AH, Indarjit M, Borovsky A. Touch to learn: Multisensory input supports word learning and processing. Dev Sci 2024;27:e13419. [PMID: 37291692] [PMCID: PMC10704002] [DOI: 10.1111/desc.13419]
Abstract
Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods, we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.
Affiliation(s)
- Amanda H Seidl: Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Michelle Indarjit: Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Arielle Borovsky: Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA

4. Sclafani V, De Pascalis L, Bozicevic L, Sepe A, Ferrari PF, Murray L. Similarities and differences in the functional architecture of mother-infant communication in rhesus macaque and British mother-infant dyads. Sci Rep 2023;13:13164. [PMID: 37574499] [PMCID: PMC10423724] [DOI: 10.1038/s41598-023-39623-3]
Abstract
Similarly to humans, rhesus macaques engage in mother-infant face-to-face interactions. However, no previous studies have described the naturally occurring structure and development of mother-infant interactions in this population or used a comparative-developmental perspective to directly compare them to those reported in humans. Here, we investigate the development of infant communication and maternal responsiveness in the two groups. We video-recorded mother-infant interactions in both groups in naturalistic settings and analysed them with the same micro-analytic coding scheme. Results show that infant social expressiveness and maternal responsiveness are similarly structured in humans and macaques. Both human and macaque mothers use specific mirroring responses to specific infant social behaviours (modified mirroring to communicative signals, enriched mirroring to affiliative gestures). However, important differences were identified in the development of infant social expressiveness and in forms of maternal responsiveness, with vocal responses and marking behaviours being predominantly human. Results indicate a common functional architecture of mother-infant communication in humans and monkeys, and contribute to theories concerning the evolution of specific traits of human behaviour.
Affiliation(s)
- V Sclafani: Winnicott Research Unit, Department of Psychology, University of Reading, Reading, UK; College of Social Sciences, School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- L De Pascalis: Winnicott Research Unit, Department of Psychology, University of Reading, Reading, UK; Department of Psychology, Institute of Population Health, University of Liverpool, Liverpool, UK; Department of Psychology, University of Bologna, Bologna, Italy
- L Bozicevic: Winnicott Research Unit, Department of Psychology, University of Reading, Reading, UK; Department of Primary Care & Mental Health, Institute of Population Health, University of Liverpool, Liverpool, Merseyside, UK
- A Sepe: Department of Medicine and Surgery, University of Parma, Parma, Italy; Laboratory of Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, Belgium
- P F Ferrari: Department of Medicine and Surgery, University of Parma, Parma, Italy; Institut des Sciences Cognitives 'Marc Jeannerod', CNRS, Bron, and Université Claude Bernard Lyon 1, Lyon, France
- L Murray: Winnicott Research Unit, Department of Psychology, University of Reading, Reading, UK

5. Edgar EV, Todd JT, Bahrick LE. Intersensory processing of faces and voices at 6 months predicts language outcomes at 18, 24, and 36 months of age. Infancy 2023;28:569-596. [PMID: 36760157] [PMCID: PMC10564323] [DOI: 10.1111/infa.12533]
Abstract
Intersensory processing of social events (e.g., matching the sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates that 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18 and 24 months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings, testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3 and 6 months predicts language outcomes at 12, 18, 24, and 36 months, holding traditional predictors constant. Results demonstrate that intersensory processing of faces and voices at 6 months (but not 3 months) accounted for significant unique variance in language outcomes at 18, 24, and 36 months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6 months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.

6. Ko ES, Abu-Zhaya R, Kim ES, Kim T, On KW, Kim H, Zhang BT, Seidl A. Mothers' use of touch across infants' development and its implications for word learning: Evidence from Korean dyadic interactions. Infancy 2023;28:597-618. [PMID: 36757022] [PMCID: PMC10085827] [DOI: 10.1111/infa.12532]
Abstract
Caregivers' touches that occur alongside words and utterances could aid in the detection of word/utterance boundaries and the mapping of word forms to word meanings. We examined changes in caregivers' use of touches with their speech directed to infants using a multimodal cross-sectional corpus of 35 Korean mother-child dyads across three age groups of infants (8, 14, and 27 months). We tested the hypothesis that caregivers' frequency and use of touches with speech change with infants' development. Results revealed that the frequency of word/utterance-touch alignment as well as word + touch co-occurrence is highest in speech addressed to the youngest group of infants. Thus, this study provides support for the hypothesis that caregivers' use of touch during dyadic interactions is sensitive to infants' age in a way similar to caregivers' use of speech alone and could provide cues useful to infants' language learning at critical points in early development.
Affiliation(s)
- Eon-Suk Ko: Department of English Language and Literature, Chosun University
- Eun-Sol Kim: Department of Computer Science, Hanyang University
- Hyunji Kim: Department of English Language and Literature, Chosun University
- Byoung-Tak Zhang: Department of Computer Science and Engineering & SNU Artificial Intelligence Institute, Seoul National University
- Amanda Seidl: Department of Speech, Language, and Hearing Sciences, Purdue University

7. Tan SHJ, Kalashnikova M, Burnham D. Seeing a talking face matters: Infants' segmentation of continuous auditory-visual speech. Infancy 2023;28:277-300. [PMID: 36217702] [DOI: 10.1111/infa.12509]
Abstract
Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether there is facilitation of speech segmentation by visual information, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.
Affiliation(s)
- Sok Hui Jessica Tan: The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milpera, New South Wales, Australia; Office of Education Research, National Institute of Education, Nanyang Technological University, Singapore, Singapore
- Marina Kalashnikova: The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milpera, New South Wales, Australia; The Basque Centre on Cognition, Brain and Language, San Sebastián, Basque Country, Spain; IKERBASQUE, Basque Foundation for Science, San Sebastián, Basque Country, Spain
- Denis Burnham: The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Milpera, New South Wales, Australia

8. Lee C, Lew-Williams C. The dynamic functions of social cues during children's word learning. Infant Child Dev 2022. [DOI: 10.1002/icd.2372]
Affiliation(s)
- Crystal Lee: Department of Psychology, Princeton University, Princeton, New Jersey, USA

9. Sun L, Griep CD, Yoshida H. Shared multimodal input through social coordination: Infants with monolingual and bilingual learning experiences. Front Psychol 2022;13:745904. [PMID: 35519632] [PMCID: PMC9066094] [DOI: 10.3389/fpsyg.2022.745904]
Abstract
A growing number of children in the United States are exposed to multiple languages at home from birth. However, relatively little is known about the early process of word learning: how words are mapped to referents in child-centered learning experiences. The present study defined parental input operationally as the integrated, multimodal learning experiences an infant engages in with his or her parent during an interactive play session with objects. Using a head-mounted eye-tracking device, we recorded visual scenes from the infant's point of view, along with the parent's social input with respect to gaze, labeling, and actions of object handling. Fifty-one infants and toddlers (aged 6-18 months) from English monolingual or diverse bilingual households were recruited to observe early multimodal learning experiences in an object play session. Although monolingual parents spoke more and labeled more frequently than bilingual parents, infants from both language groups benefited from a comparable amount of socially coordinated experiences in which parents named an object while the infant looked at it. A sequential path analysis also revealed multiple socially coordinated pathways that facilitate infant object looking. Specifically, young children's attention to the referent objects was directly influenced by parents' object handling. These findings point to a new approach to early language input and to how multimodal learning experiences are socially coordinated for young children growing up in monolingual and bilingual learning contexts.
Affiliation(s)
- Lichao Sun: Department of Psychology, University of Houston, Houston, TX, United States
- Christina D Griep: Department of Psychology, University of Houston, Houston, TX, United States
- Hanako Yoshida: Department of Psychology, University of Houston, Houston, TX, United States

10. The temporal dynamics of labelling shape infant object recognition. Infant Behav Dev 2022;67:101698. [DOI: 10.1016/j.infbeh.2022.101698]

11. Bastianello T, Keren-Portnoy T, Majorano M, Vihman M. Infant looking preferences towards dynamic faces: A systematic review. Infant Behav Dev 2022;67:101709. [PMID: 35338995] [DOI: 10.1016/j.infbeh.2022.101709]
Abstract
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant's attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the baby's development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child's competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
Affiliation(s)
- Marilyn Vihman: Department of Language and Linguistic Science, University of York, UK

12. Object label and category knowledge among toddlers at risk for autism spectrum disorder: An application of the visual array task. Infant Behav Dev 2022;67:101705. [PMID: 35338994] [PMCID: PMC9197929] [DOI: 10.1016/j.infbeh.2022.101705]
Abstract
Individuals diagnosed with autism spectrum disorder (ASD) demonstrate atypical development of receptive language and object category knowledge. Yet, little is known about the emerging relation between these two competencies in this population. The present study utilized a gaze-based paradigm, the visual array task (VAT), to examine the relation between object label and object category knowledge in a sample of toddlers at heightened genetic risk for developing ASD. Eighty-eight toddlers with at least one typically developing older sibling (low-risk; LR) or one older sibling diagnosed with ASD (high-risk; HR) completed the VAT at 17 (LR n = 20; HR n = 27) and/or 25 months of age (LR n = 42; HR n = 22). Results indicated that the VAT was both a sensitive measure of receptive vocabulary and capable of reflecting gains in category knowledge for toddlers at genetic risk of developing ASD. Notably, an early emerging difference between the groups in the relation between target label knowledge and category knowledge was observed at 17 months of age but had dissipated by 25 months of age. This suggests that while the link between receptive vocabulary and category knowledge may develop earlier in LR groups, HR groups may catch up by the second year of life. It is therefore likely meaningful to consider differences in category knowledge when conceptualizing the receptive language deficits associated with HR populations. During language learning, typically developing children are sensitive to the common features of category members and use this information to generalize known object labels to newly encountered exemplars. The inability to identify similarities between category members and/or to utilize this information when learning new object referents at 17 months of age may be a potential mechanism underlying the delays observed in HR populations.

13. Murray L, Rayson H, Ferrari PF, Wass SV, Cooper PJ. Dialogic book-sharing as a privileged intersubjective space. Front Psychol 2022;13:786991. [PMID: 35310233] [PMCID: PMC8927819] [DOI: 10.3389/fpsyg.2022.786991]
Abstract
Parental reading to young children is well established as being positively associated with child cognitive development, particularly language development. Research indicates that a particular "intersubjective" form of using books with children, "Dialogic Book-sharing" (DBS), is especially beneficial to infants and pre-school aged children, particularly when picture books are used. The work on DBS to date has paid little attention to the theoretical and empirical underpinnings of the approach. Here, we address the question of which processes taking place during DBS confer benefits to child development, and why these processes are beneficial. In a novel integration of evidence, ranging from non-human primate communication through iconic gestures and pointing, archaeological data on pre-hominid and early human art, to experimental and naturalistic studies of infant attention, cognitive processing, and language, we argue that DBS entails core characteristics that make it a privileged intersubjective space for the promotion of child cognitive and language development. This analysis, together with the findings of DBS intervention studies, provides a powerful intellectual basis for the wide-scale promotion of DBS, especially in disadvantaged populations.
Affiliation(s)
- Lynne Murray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Holly Rayson: Institut des Sciences Cognitives Marc Jeannerod (CNRS), Bron, France
- Pier-Francesco Ferrari: Institut des Sciences Cognitives Marc Jeannerod (CNRS), Bron, France; Dipartimento di Neuroscienza, Università di Parma, Parma, Italy
- Sam V. Wass: School of Psychology, University of East London, London, United Kingdom
- Peter J. Cooper: School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom

14. Perkovich E, Sun L, Mire S, Laakman A, Sakhuja U, Yoshida H. What children with and without ASD see: Similar visual experiences with different pathways through parental attention strategies. Autism Dev Lang Impair 2022;7:23969415221137293. [PMID: 36518657] [PMCID: PMC9742584] [DOI: 10.1177/23969415221137293]
Abstract
Background and aims: Although young children's gaze behaviors in experimental task contexts have been shown to be potential biobehavioral markers relevant to autism spectrum disorder (ASD), we know little about their everyday gaze behaviors. The present study aims (1) to document early gaze behaviors that occur within a live, social interactive context among children with and without ASD and their parents, and (2) to examine how children's and parents' gaze behaviors are related in ASD and typically developing (TD) groups. A head-mounted eye-tracking system was used to record the frequency and duration of a set of gaze behaviors (such as sustained attention [SA] and joint attention [JA]) that are relevant to early cognitive and language development.
Methods: Twenty-six parent-child dyads (ASD group = 13, TD group = 13) participated. Children were between 3 and 8 years old. We placed head-mounted eye trackers on parents and children to record their parent- and child-centered views, and we also recorded the interactive parent-child object play scene from both a wall-mounted and a ceiling-mounted camera. We then annotated the frequency and duration of gaze behaviors (saccades, fixations, SA, and JA) for different regions of interest (object, face, and hands), as well as attention shifting. Independent-group t-tests and ANOVAs were used for group comparisons, and linear regression was used to test how well parent gaze behaviors predicted JA.
Results: The present study found no differences in visual experiences between children with and without ASD. Interestingly, however, significant group differences were found for parent gaze behaviors. Compared to parents of children with ASD, parents of TD children focused on objects more and shifted their attention between objects and their children's faces more. In contrast, parents of children with ASD were more likely to shift their attention between their own hands and their children. JA experiences were also predicted differently depending on the group: among parents of TD children, attention to objects predicted JA, but among parents of children with ASD, attention to their children predicted JA.
Conclusion: Although no differences were found between the gaze behaviors of autistic and TD children in this study, there were significant group differences in parents' looking behaviors. This suggests potentially differential pathways for the scaffolding effect of parental gaze for children with ASD compared with TD children.
Implications: The present study revealed the impact of everyday, socially interactive contexts on early visual experiences, and points to potentially different pathways by which parental looking behaviors guide the looking behaviors of children with and without ASD. Identifying parental social input relevant to early attention development (e.g., JA) among autistic children has implications for mechanisms that could support socially mediated attention behaviors, which have been documented to facilitate early cognitive and language development, and for the development of parent-mediated interventions for young children with or at risk for ASD.
Note: This paper uses a combination of person-first and identity-first language, an intentional decision aligning with comments put forth by Vivanti (2020), recognizing the complexities of known and unknown preferences of those in the larger autism community.
Affiliation(s)
- Lichao Sun: Department of Psychology, University of Houston, Houston, TX, USA
- Sarah Mire: Educational Psychology Department, Baylor University, Waco, TX, USA
- Anna Laakman: Department of Psychological Health and Learning Sciences, University of Houston, Houston, TX, USA
- Urvi Sakhuja: Department of Psychology, University of Houston, Houston, TX, USA
- Hanako Yoshida: Department of Psychology, University of Houston, Houston, TX, USA

15. Long BL, Sanchez A, Kraus AM, Agrawal K, Frank MC. Automated detections reveal the social information in the changing infant view. Child Dev 2021;93:101-116. [PMID: 34787894] [DOI: 10.1111/cdev.13648]
Abstract
How do postural developments affect infants' access to social information? We recorded egocentric and third-person video while infants and their caregivers (N = 36, 8- to 16-month-olds, N = 19 females) participated in naturalistic play sessions. We then validated the use of a neural network pose detection model to detect faces and hands in the infant view. We used this automated method to analyze our data and a prior egocentric video dataset (N = 17, 12-month-olds). Infants' average posture and orientation with respect to their caregiver changed dramatically across this age range; both posture and orientation modulated access to social information. Together, these results confirm that infants' ability to move and act on the world plays a significant role in shaping the social information in their view.
Affiliation(s)
- Bria L Long: Department of Psychology, Stanford University, Stanford, California, USA
- Alessandro Sanchez: Department of Psychology, Stanford University, Stanford, California, USA
- Allison M Kraus: Department of Psychology, Stanford University, Stanford, California, USA
- Ketan Agrawal: Department of Psychology, Stanford University, Stanford, California, USA
- Michael C Frank: Department of Psychology, Stanford University, Stanford, California, USA

16. Chen CH, Houston DM, Yu C. Parent-child joint behaviors in novel object play create high-quality data for word learning. Child Dev 2021;92:1889-1905. [PMID: 34463350] [DOI: 10.1111/cdev.13620]
Abstract
This research takes a dyadic approach to studying early word learning, focusing on toddlers' (N = 20, age: 17-23 months) information-seeking behaviors, parents' information-providing behaviors, and the ways the two are coupled in real-time parent-child interactions. Using head-mounted eye tracking, this study provides the first detailed comparison of children's and their parents' behavioral and attentional patterns in two free-play contexts: one with novel objects with to-be-learned names (Learning condition) and the other with familiar objects with known names (Play condition). Children and parents in the Learning condition modified their individual and joint behaviors when encountering novel objects with to-be-learned names, which created clearer signals that reduced referential ambiguity and potentially facilitated word learning.
Affiliation(s)
- Chen Yu: The University of Texas at Austin

17. Akama H, Yuan Y, Awazu S. Task-induced brain functional connectivity as a representation of schema for mediating unsupervised and supervised learning dynamics in language acquisition. Brain Behav 2021;11:e02157. [PMID: 33951344] [PMCID: PMC8213930] [DOI: 10.1002/brb3.2157]
Abstract
Introduction: Based on the schema theory advanced by Rumelhart and Norman, we shed light on the individual variability in brain dynamics induced by the hybridization of learning methodologies, particularly alternating unsupervised and supervised learning in language acquisition. The concept of "schema" implies a latent knowledge structure that a learner holds and updates, as intrinsic to his or her cognitive space, for guiding the processing of newly arriving information.
Methods: We replicated the cognitive experiment of Onnis and Thiessen on implicit statistical learning ability in language acquisition but included additional factors of prosodic variables and explicit supervised learning. Functional magnetic resonance imaging was performed to identify the functional network connections for schema updating by alternately using unsupervised and supervised artificial grammar learning tasks to segment potential words.
Results: Regardless of the quality of task performance, the default mode network represented the first stage of spontaneous unsupervised learning and, for successful subjects, the wrap-up accomplishment of the whole hybrid learning in concurrence with the task-related auditory language networks. Furthermore, subjects who could easily "tune" the schema to achieve a high task precision rate resorted even at an early stage to self-supervised learning, or "superlearning," a set of different learning mechanisms that act in synergy to trigger widespread neuro-transformation centered on the cerebellum.
Conclusions: Investigation of the brain dynamics revealed by functional connectivity imaging analysis was able to differentiate the synchronized neural responses with respect to learning methods and the order effect that affects hybrid learning.
Affiliation(s)
- Hiroyuki Akama: Institute of Liberal Arts/Department of Life Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
- Yixin Yuan: Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA; Division of Autism & Related Disabilities, Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, USA
- Shunji Awazu: Faculty of Humanities and Social Sciences, Jissen Women's University, Tokyo, Japan

18. Çetinçelik M, Rowland CF, Snijders TM. Do the eyes have it? A systematic review on the role of eye gaze in infant language development. Front Psychol 2021;11:589096. [PMID: 33584424] [PMCID: PMC7874056] [DOI: 10.3389/fpsyg.2020.589096]
Abstract
Eye gaze is a ubiquitous cue in child-caregiver interactions, and infants are highly attentive to eye gaze from very early on. However, the questions of why infants show gaze-sensitive behavior, and what role this sensitivity to gaze plays in their language development, are not yet well understood. To gain a better understanding of the role of eye gaze in infants' language learning, we conducted a broad systematic review of the developmental literature for all studies that investigate the role of eye gaze in infants' language development. Across 77 peer-reviewed articles containing data from typically developing human infants (0-24 months) in the domain of language development, we identified two broad themes. The first tracked the effect of eye gaze on four developmental domains: (1) vocabulary development, (2) word-object mapping, (3) object processing, and (4) speech processing. Overall, there is considerable evidence that infants learn more about objects and are more likely to form word-object mappings in the presence of eye gaze cues, both of which are necessary for learning words. In addition, there is good evidence for longitudinal relationships between infants' gaze-following abilities and later receptive and expressive vocabulary. However, many domains (e.g., speech processing) are understudied; further work is needed to decide whether gaze effects are specific to tasks such as word-object mapping, or whether they reflect a general learning-enhancement mechanism. The second theme explored the reasons why eye gaze might be facilitative for learning, addressing the question of whether eye gaze is treated by infants as a specialized socio-cognitive cue. We concluded that the balance of evidence supports the idea that eye gaze facilitates infants' learning by enhancing their arousal, memory, and attentional capacities to a greater extent than other low-level attentional cues. However, as yet, there are too few studies that directly compare the effect of eye gaze cues and non-social attentional cues for strong conclusions to be drawn. We also suggest that there might be a developmental effect, with eye gaze developing, over the course of the first 2 years of life, into a truly ostensive cue that enhances language learning across the board.
Affiliation(s)
- Melis Çetinçelik: Language Development Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Caroline F Rowland: Language Development Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Tineke M Snijders: Language Development Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands

19. Zhuang C, Yan S, Nayebi A, Schrimpf M, Frank MC, DiCarlo JJ, Yamins DLK. Unsupervised neural network models of the ventral visual stream. Proc Natl Acad Sci U S A 2021;118:e2014196118. [PMID: 33431673] [PMCID: PMC7826371] [DOI: 10.1073/pnas.2014196118]
Abstract
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today's best supervised methods and that the mapping of these neural network models' hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
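As background for the "deep unsupervised contrastive embedding methods" this abstract names, the sketch below shows the core of a SimCLR-style InfoNCE objective, a canonical form of contrastive embedding learning. It is a minimal illustration under our own naming (the function info_nce_loss, the toy tensor shapes, and the temperature value are ours), not the authors' actual training code.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        # z1, z2: (batch, dim) embeddings of two augmented "views" of the same
        # images; row i of z1 and row i of z2 are positives, all others negatives.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature       # pairwise cosine similarities
        targets = torch.arange(z1.size(0))       # positive pairs lie on the diagonal
        return F.cross_entropy(logits, targets)

    # Toy usage: 8 paired views with 128-dimensional embeddings
    loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))

Training a backbone network to minimize such a loss over augmented image pairs (e.g., frames from head-mounted camera video), then regressing its hidden-layer activations onto neural recordings, is the general recipe this line of work describes.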
Affiliation(s)
- Chengxu Zhuang: Department of Psychology, Stanford University, Stanford, CA 94305
- Siming Yan: Department of Computer Science, The University of Texas at Austin, Austin, TX 78712
- Aran Nayebi: Neurosciences PhD Program, Stanford University, Stanford, CA 94305
- Martin Schrimpf: Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Michael C Frank: Department of Psychology, Stanford University, Stanford, CA 94305
- James J DiCarlo: Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- Daniel L K Yamins: Department of Psychology, Department of Computer Science, and Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305

20.
Abstract
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other's mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention in all cases? Beyond cases where visual information is missing, we show how combining vision with other senses can be helpful, and even necessary, for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy, and theoretical debates on the construction of a shared world.
Affiliation(s)
- Lucas Battich: Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany; Graduate School of Systemic Neurosciences, Ludwig Maximilian University Munich, Munich, Germany
- Merle Fairhurst: Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany; Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany; Institut für Psychologie, Fakultät für Humanwissenschaften, Universität der Bundeswehr München, Munich, Germany
- Ophelia Deroy: Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany; Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany; Institute of Philosophy, School of Advanced Study, University of London, London, UK

21. Kawai Y, Oshima Y, Sasamoto Y, Nagai Y, Asada M. A computational model for child inferences of word meanings via syntactic categories for different ages and languages. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2018.2883048]

22. Chen CH, Castellanos I, Yu C, Houston DM. Parental linguistic input and its relation to toddlers' visual attention in joint object play: A comparison between children with normal hearing and children with hearing loss. Infancy 2020;24:589-612. [PMID: 32677253] [DOI: 10.1111/infa.12291]
Abstract
Parent-child interactions are multimodal, often involving coordinated exchanges of visual and auditory information between the two partners. The current work focuses on the effect of children's hearing loss on parent-child interactions when parents and their toddlers jointly played with a set of toy objects. We compared the linguistic input received by toddlers with hearing loss (HL) with that received by their chronological-age-matched (CA) and hearing-age-matched (HA) normal-hearing peers. Moreover, we used head-mounted eye trackers to examine how different types of parental linguistic input affected children's visual attention on objects when parents either led or followed children's attention during joint object play. Overall, parents of children with HL provided a comparable amount of linguistic input to parents of the two normal-hearing groups. However, the types of linguistic input produced by parents of children with HL were similar to the CA group in some ways and similar to the HA group in others. Interestingly, the effects of different types of linguistic input on extending the attention of children with HL qualitatively resembled the patterns seen in the CA group, even though the effects were less pronounced in the HL group. We discuss the implications of these results for our understanding of the reciprocal, dynamic, and multi-factored nature of parent-child interactions.
Affiliation(s)
- Chi-Hsin Chen: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University
- Irina Castellanos: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University; Nationwide Children's Hospital
- Chen Yu: Department of Psychological and Brain Sciences, Indiana University
- Derek M Houston: Department of Otolaryngology-Head & Neck Surgery, The Ohio State University; Nationwide Children's Hospital

23. Chen CH, Castellanos I, Yu C, Houston DM. What leads to coordinated attention in parent-toddler interactions? Children's hearing status matters. Dev Sci 2020;23:e12919. [PMID: 31680414] [PMCID: PMC7160036] [DOI: 10.1111/desc.12919]
Abstract
Coordinated attention between children and their parents plays an important role in children's social, language, and cognitive development. The current study used head-mounted eye trackers to investigate the effects of children's prelingual hearing loss on how they achieve coordinated attention with their hearing parents during free-flowing object play. We found that toddlers with hearing loss (age: 24-37 months) had overall gaze patterns (e.g., gaze length and proportion of face looking) similar to those of their normal-hearing peers. In addition, children's hearing status did not affect how likely parents and children were to attend to the same object at the same time during play. However, when following parents' attention, children with hearing loss used both parents' gaze directions and hand actions as cues, whereas children with normal hearing mainly relied on parents' hand actions. The diversity of pathways leading to coordinated attention suggests the flexibility and robustness of developing systems in using multiple pathways to achieve the same functional end.
Affiliation(s)
- Chi-hsin Chen: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Irina Castellanos: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212; Nationwide Children's Hospital, 700 Children's Dr, Columbus, Ohio 43205
- Chen Yu: Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, Indiana 47405
- Derek M. Houston: Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212; Nationwide Children's Hospital, 700 Children's Dr, Columbus, Ohio 43205

24. Gogate L. Maternal object naming is less adapted to preterm infants' than to term infants' word mapping. J Child Psychol Psychiatry 2020;61:447-458. [PMID: 31710089] [DOI: 10.1111/jcpp.13128]
Abstract
Background: Term infants learn word-object relations in their first year during multisensory interactions with caregivers. Although preterm infants often experience language delays, little is known about how caregivers contribute to their early word-object learning. The present longitudinal study compared maternal naming and word learning in these infant groups.
Methods: Forty moderately preterm and 40 term infants participated at 6-9 and 12 months with their mothers. At each visit, mothers named two novel objects during play, and infants' learning was assessed using dynamic displays of the familiar and novel (mismatched) word-object relations. Infants' general cognitive, language, and motoric abilities were evaluated. Maternal multisensory naming was coded for synchrony between the target words and object motions, and for other naming styles.
Results: During play, although overall maternal naming style was similar across infant groups within visits, naming frequency increased from visit 1 to visit 2 for term but not preterm infants. On the test at visit 1, although term infants looked equally to novel and familiar word-object relations, their looking to the novel relations correlated positively with maternal synchrony use but inversely with naming frequency. At visit 2, term infants looked longer at the novel relations. In contrast, preterm infants showed no looking preference at either visit, nor was their word-object learning correlated with maternal naming. Their Bayley-III cognitive, language, and motor scores, but not their MCDI vocabulary, were attenuated compared with term infants'.
Conclusions: Less adaptive maternal naming and delayed word mapping in moderately preterm infants underscore a critical need for multisensory language intervention prior to first-word onset, to alleviate its cascading effects on later language.
Affiliation(s)
- Lakshmi Gogate: Department of Speech, Language and Hearing Sciences, University of Missouri, Columbia, MO, USA

25. van Schaik JE, Meyer M, van Ham CR, Hunnius S. Motion tracking of parents' infant- versus adult-directed actions reveals general and action-specific modulations. Dev Sci 2020;23:e12869. [PMID: 31132212] [PMCID: PMC6916206] [DOI: 10.1111/desc.12869]
Abstract
Parents tend to modulate their movements when demonstrating actions to their infants. Thus far, these modulations have primarily been quantified by human raters and for entire interactions, thereby possibly overlooking the intricacy of such demonstrations. Using optical motion tracking, the precise modulations of parents' infant-directed actions were quantified and compared to adult-directed actions and between action types. Parents demonstrated four novel objects to their 14-month-old infants and to adult confederates. Each object required a specific action to produce a unique effect (e.g., rattling). Parents were asked to demonstrate an object at least once before passing it to their demonstration partner, and they were subsequently free to exchange the object as often as desired. Infants' success at producing the objects' action-effects was coded during the demonstration session, and their memory of the action-effects was tested after a delay of several minutes. Indicating general modulations across actions, parents repeated demonstrations more often, performed the actions in closer proximity, and demonstrated action-effects for longer when interacting with their infant compared with the adults. Meanwhile, modulations of movement size and velocity were specific to certain action-effect pairs. Furthermore, a 'just right' modulation of proximity was detected: infants' learning, memory, and parents' prior evaluations of their infants' motor abilities were related to demonstrations performed neither too far from nor too close to the infants. Together, these findings indicate that infant-directed action modulations are not solely overall exaggerations but depend on the characteristics of the to-be-learned actions, their effects, and the infant learners.
Affiliation(s)
- Johanna E. van Schaik: Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, The Netherlands; Institute of Education and Child Studies, Leiden University, Leiden, The Netherlands
- Marlene Meyer: Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, The Netherlands; Department of Psychology, University of Chicago, Chicago, Illinois
- Camila R. van Ham: Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, The Netherlands
- Sabine Hunnius: Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, The Netherlands

26. Vivona JM. The interpersonal words of the infant: Implications of current infant language research for psychoanalytic theories of infant development, language, and therapeutic action. Psychoanal Q 2019. [DOI: 10.1080/00332828.2019.1652048]
|
27. George NR, Bulgarelli F, Roe M, Weiss DJ. Stacking the evidence: Parents' use of acoustic packaging with preschoolers. Cognition 2019; 191:103956. [PMID: 31276946] [PMCID: PMC6814401] [DOI: 10.1016/j.cognition.2019.04.025] [Citation(s) in RCA: 0]
Abstract
Segmenting continuous events into discrete actions is critical for understanding the world. Because infants may lack top-down knowledge of event structure, caregivers provide audiovisual cues to guide the process, aligning action descriptions with event boundaries to increase their salience. This acoustic packaging may be specific to infant-directed speech, but little is known about when and why the use of this cue wanes. We explored whether acoustic packaging persists when parents teach 2.5- to 5.5-year-old children about various toys. Parents produced a smaller percentage of action speech relative to studies with infants. However, action speech largely remained more aligned with action boundaries than non-action speech. Further, for the more challenging novel toys, parents modulated their use of acoustic packaging, providing more of it for children with lower vocabularies. Our findings suggest that acoustic packaging persists beyond interactions with infants, underscoring the utility of multimodal cues for learning, particularly for less knowledgeable learners in challenging learning environments.
Affiliations
- Federica Bulgarelli: Duke University, United States; Pennsylvania State University, United States
- Mary Roe: Pennsylvania State University, United States
28. Eiteljoerge SFV, Adam M, Elsner B, Mani N. Consistency of co-occurring actions influences young children's word learning. R Soc Open Sci 2019; 6:190097. [PMID: 31598229] [PMCID: PMC6731739] [DOI: 10.1098/rsos.190097] [Citation(s) in RCA: 2]
Abstract
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects; for example, carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children's word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children's learning of novel word-object associations. Children (18 months, 30 months, and 36-48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word-object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels, to examine whether they recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child's perceptual abilities to shape the learning experience.
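The growth curve models referred to here fit looking behaviour over time with polynomial time terms and by-participant random effects. The following sketch shows the general shape of such an analysis on simulated data; the variable names (prop_target, cond), the effect built into the simulation, and the model specification are illustrative assumptions, not the authors' analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated test-phase data: proportion of target looking per subject per time bin
    rng = np.random.default_rng(0)
    n_subj, n_bins = 20, 30
    df = pd.DataFrame({
        "subj": np.repeat(np.arange(n_subj), n_bins),
        "time": np.tile(np.linspace(0, 1, n_bins), n_subj),
    })
    df["cond"] = np.where(df["subj"] % 2 == 0, "consistent", "inconsistent")
    # Hypothetical effect: target looking rises over time only for consistent pairings
    df["prop_target"] = (0.5 + 0.3 * df["time"] * (df["cond"] == "consistent")
                         + rng.normal(0, 0.05, len(df))).clip(0, 1)

    # Orthogonal (Legendre) polynomial time terms, as in growth curve analysis
    leg = np.polynomial.legendre.legvander(2 * df["time"].to_numpy() - 1, 2)
    df["ot1"], df["ot2"] = leg[:, 1], leg[:, 2]

    # Mixed-effects growth curve model: condition effects on intercept and time terms,
    # with by-subject random intercepts and linear slopes
    model = smf.mixedlm("prop_target ~ (ot1 + ot2) * cond", df,
                        groups=df["subj"], re_formula="~ot1")
    print(model.fit().summary())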
Affiliations
- Sarah F. V. Eiteljoerge: Department for Psychology of Language, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus 'Primate Cognition', Goettingen, Germany
- Maurits Adam: Developmental Psychology, Department of Psychology, University of Potsdam, Potsdam, Germany
- Birgit Elsner: Developmental Psychology, Department of Psychology, University of Potsdam, Potsdam, Germany
- Nivedita Mani: Department for Psychology of Language, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus 'Primate Cognition', Goettingen, Germany
29. Genty E. Vocal–gestural combinations in infant bonobos: new insights into signal functional specificity. Anim Cogn 2019; 22:505-518. [DOI: 10.1007/s10071-019-01267-0] [Citation(s) in RCA: 10]
30. Mason GM, Goldstein MH, Schwade JA. The role of multisensory development in early language learning. J Exp Child Psychol 2019; 183:48-64. [PMID: 30856417] [DOI: 10.1016/j.jecp.2018.12.011] [Citation(s) in RCA: 9]
Abstract
In typical development, communicative skills such as language emerge from infants' ability to combine multisensory information into cohesive percepts. For example, the act of associating the visual or tactile experience of an object with its spoken name is commonly used as a measure of early word learning, and social attention and speech perception frequently involve integrating both visual and auditory attributes. Early perspectives once regarded perceptual integration as one of infants' primary challenges, whereas recent work suggests that caregivers' social responses contain structured patterns that may facilitate infants' perception of multisensory social cues. In the current review, we discuss the regularities within caregiver feedback that may allow infants to more easily discriminate and learn from social signals. We focus on the statistical regularities that emerge in the moment-by-moment behaviors observed in studies of naturalistic caregiver-infant play. We propose that the spatial form and contingencies of caregivers' responses to infants' looks and prelinguistic vocalizations facilitate communicative and cognitive development. We also explore how individual differences in infants' sensory and motor abilities may reciprocally influence caregivers' response patterns, in turn regulating and constraining the types of social learning opportunities that infants experience across early development. We end by discussing implications for neurodevelopmental conditions affecting both multisensory integration and communication (i.e., autism) and suggest avenues for further research and intervention.
Affiliations
- Gina M. Mason: Department of Psychology, Cornell University, Ithaca, NY 14853, USA
31. Jo J, Ko ES. Korean Mothers Attune the Frequency and Acoustic Saliency of Sound Symbolic Words to the Linguistic Maturity of Their Children. Front Psychol 2018; 9:2225. [PMID: 30618893] [PMCID: PMC6305434] [DOI: 10.3389/fpsyg.2018.02225] [Citation(s) in RCA: 3]
Abstract
The present study investigates Korean mothers' use of sound symbolism, in particular expressive lengthening and ideophones, in speech directed to their children. Specifically, we explore whether the frequency and acoustic saliency of sound symbolic words are modulated by the maturity of children's linguistic ability. A total of 36 infant-mother dyads, 12 in each of three groups (preverbal, M = 8 months; early speech, M = 13 months; multiword, M = 27 months), were recorded in a 40-min free-play session. The results were consistent with previous findings that the ratio of sound symbolic words in mothers' speech decreases with child age and that these words are acoustically more salient than conventional words in duration and pitch measures. We additionally found that mothers weaken the mean-pitch prominence of ideophones for older children, suggesting that the prominence of these iconic words might bootstrap infants' word learning especially when infants are younger. Interestingly, however, mothers maintained the acoustic saliency of expressive lengthening across children's ages in all acoustic measures. There is some indication that children at age 2 have not yet mastered the fine details of scalar properties in certain words. Thus, they may still benefit from the enhanced prosody of expressive lengthening in learning the semantic attributes of scalar adjectives, and, accordingly, mothers continue to provide redundant acoustic cues for expressive lengthening longer than for ideophones.
Affiliations
- Jinyoung Jo: Department of English Language and Literature, Seoul National University, Seoul, South Korea
- Eon-Suk Ko: Department of English Language and Literature, Chosun University, Gwangju, South Korea
32. Mason GM, Kirkpatrick F, Schwade JA, Goldstein MH. The Role of Dyadic Coordination in Organizing Visual Attention in 5-Month-Old Infants. Infancy 2018; 24:162-186. [PMID: 32677200] [DOI: 10.1111/infa.12255] [Citation(s) in RCA: 17]
33. Tanaka Y, Kanakogi Y, Kawasaki M, Myowa M. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy. Dev Cogn Neurosci 2018; 30:31-40. [PMID: 29253738] [PMCID: PMC6969118] [DOI: 10.1016/j.dcn.2017.12.001] [Citation(s) in RCA: 3]
Abstract
Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate them. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially for audio-tactile (A-T) information. Using electroencephalography (EEG) and event-related potentials (ERPs), the present study investigated how the neural processing involved in A-T integration is modulated by tactile interaction. Seven- to eight-month-old infants heard one pseudoword both while being tickled (multimodal 'A-T' condition) and while not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy.
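Band-limited activity of the kind reported here (theta, beta) is commonly quantified by integrating the EEG power spectrum over the band of interest. A minimal sketch on simulated data follows; the sampling rate, epoch length, and exact band edges are assumptions for illustration, not the study's parameters.

    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    fs = 250                                # assumed sampling rate (Hz)
    rng = np.random.default_rng(1)
    eeg = rng.normal(size=2 * fs)           # 2 s of simulated single-channel EEG

    f, psd = welch(eeg, fs=fs, nperseg=fs)  # Welch power spectral density

    def band_power(freqs, power, lo, hi):
        """Integrate the PSD over the frequency band [lo, hi) in Hz."""
        mask = (freqs >= lo) & (freqs < hi)
        return trapezoid(power[mask], freqs[mask])

    theta = band_power(f, psd, 4, 8)        # theta band (frontal analyses)
    beta = band_power(f, psd, 13, 30)       # beta band (temporal analyses)
    print(f"theta power = {theta:.3e}, beta power = {beta:.3e}")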
Affiliations
- Yukari Tanaka: Graduate School of Education, Kyoto University, Kyoto, Japan
- Yasuhiro Kanakogi: NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 2-4 Hikaridai, Seika-cho, Souraku-gun, Kyoto 619-0237, Japan; Japan Society for the Promotion of Science, Kojimachi Business Center Building, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo 102-0083, Japan
- Masahiro Kawasaki: Rhythm-based Brain Information Processing Unit, RIKEN BSI-TOYOTA Collaboration Center, Saitama, Japan; Department of Intelligent Interaction Technology, Graduate School of Systems and Information Engineering, University of Tsukuba, Ibaraki, Japan
- Masako Myowa: Graduate School of Education, Kyoto University, Kyoto, Japan
34. Gogate L, Maganti M. The Origins of Verb Learning: Preverbal and Postverbal Infants' Learning of Word-Action Relations. J Speech Lang Hear Res 2017; 60:3538-3550. [PMID: 29143061] [DOI: 10.1044/2017_jslhr-l-17-0085] [Citation(s) in RCA: 8]
Abstract
Purpose: This experiment examined English- or Spanish-learning preverbal (8-9 months, n = 32) and postverbal (12-14 months, n = 40) infants' learning of word-action pairings before and after the transition to verb comprehension, and its relation to naturally learned vocabulary.
Method: Infants at both verbal levels were first habituated to two dynamic video displays of novel word-action pairings, the words /wem/ or /bæf/ spoken synchronously with an adult shaking or looming an object, and were then tested with interchanged (switched) versus same word-action pairings. Mothers of the postverbal infants reported their infants' vocabulary on the MacArthur-Bates Communicative Development Inventories (Fenson et al., 1994).
Results: The preverbal infants looked longer at the switched relative to the same pairings, suggesting word-action mapping, but the postverbal infants did not. Mothers of the postverbal infants reported a noun bias on the MacArthur-Bates Communicative Development Inventories; infants learned more nouns than verbs in the natural environment. Further analyses revealed marginal word-action mapping in postverbal infants who had learned fewer nouns and only comprehended verbs (post-verb comprehension), but not in those who had learned more nouns and also produced verbs (post-verb production).
Conclusions: These findings on verb learning from inside and outside the laboratory suggest a developmental shift from domain-general to language-specific mechanisms. Long before they talk, infants learning a noun-dominant language learn synchronous word-action relations; as a postverbal, language-specific noun bias develops, this learning temporarily diminishes. Supplemental materials: https://doi.org/10.23641/asha.5592637
35. Deák GO, Krasno AM, Jasso H, Triesch J. What Leads To Shared Attention? Maternal Cues and Infant Responses During Object Play. Infancy 2017. [DOI: 10.1111/infa.12204] [Citation(s) in RCA: 22]
Affiliations
- Gedeon O. Deák: Department of Cognitive Science, University of California at San Diego
- Anna M. Krasno: Department of Cognitive Science, University of California at San Diego
- Hector Jasso: Department of Computer Science and Engineering, University of California at San Diego
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Goethe University Frankfurt
36. Hakuno Y, Omori T, Yamamoto JI, Minagawa Y. Social interaction facilitates word learning in preverbal infants: Word–object mapping and word segmentation. Infant Behav Dev 2017; 48:65-77. [DOI: 10.1016/j.infbeh.2017.05.012] [Citation(s) in RCA: 5]
37. Hobaiter C, Byrne RW, Zuberbühler K. Wild chimpanzees' use of single and combined vocal and gestural signals. Behav Ecol Sociobiol 2017; 71:96. [PMID: 28596637] [PMCID: PMC5446553] [DOI: 10.1007/s00265-017-2325-1] [Citation(s) in RCA: 55]
Abstract
We describe the individual and combined use of vocalizations and gestures in wild chimpanzees. The rate of gesturing peaked in infancy and, with the exception of the alpha male, decreased again in older age groups, while vocal signals showed the opposite pattern. Although gesture-vocal combinations were relatively rare, they were consistently found in all age groups, especially during affiliative and agonistic interactions. Within behavioural contexts, rank (excluding alpha rank) had no effect on the rate of male chimpanzees' use of vocal or gestural signals and only a small effect on their use of combination signals. The alpha male was an outlier, however, both as a prolific user of gestures and as a recipient of high levels of vocal and gesture-vocal signals. Persistence in signal use varied with signal type: chimpanzees persisted in the use of gestures and gesture-vocal combinations after failure, but where their vocal signals failed they tended to add gestural signals to produce gesture-vocal combinations. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, adjusting their use of signal types according to social context and taking into account potential out-of-sight audiences. We discuss these findings in relation to the various socio-ecological challenges that chimpanzees are exposed to in their natural forest habitats and the current discussion of multimodal communication in great apes.
Significance statement: All animal communication combines different types of signals, including vocalizations, facial expressions, and gestures. However, the study of primate communication has typically focused on the use of signal types in isolation. As a result, we know little about how primates use the full repertoire of signals available to them. Here we present a systematic study of the individual and combined use of gestures and vocalizations in wild chimpanzees. We find that gesturing peaks in infancy and decreases in older age, while vocal signals show the opposite distribution, and patterns of persistence after failure suggest that gestural and vocal signals may encode different types of information. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, adjusting their use of signal types according to social context and taking into account potential out-of-sight audiences.
Affiliations
- C. Hobaiter: School of Psychology and Neuroscience, University of St Andrews, St Mary's College, South Street, St Andrews KY16 9JP, Scotland; Budongo Conservation Field Station, Masindi, Uganda
- R. W. Byrne: School of Psychology and Neuroscience, University of St Andrews, St Mary's College, South Street, St Andrews KY16 9JP, Scotland
- K. Zuberbühler: School of Psychology and Neuroscience, University of St Andrews, St Mary's College, South Street, St Andrews KY16 9JP, Scotland; Budongo Conservation Field Station, Masindi, Uganda; Department of Comparative Cognition, University of Neuchâtel, Neuchâtel, Switzerland
38. Lund E, Schuele CM. Word-learning performance of children with and without cochlear implants given synchronous and asynchronous cues. Clin Linguist Phon 2017; 31:777-790. [PMID: 28521543] [DOI: 10.1080/02699206.2017.1320587] [Citation(s) in RCA: 0]
Abstract
This study evaluated the effects of synchronous and asynchronous auditory-visual cues on the word-learning performance of children with cochlear implants and children with normal hearing matched for chronological age. Children with cochlear implants (n = 9) who had worn the implant for less than one year and age-matched children (n = 9) participated in rapid word-learning trials. Children with cochlear implants did not learn words in either the synchronous or the asynchronous condition (U = 49.5, p = .99; d = 0.05), whereas children with normal hearing learned more words in the synchronous than in the asynchronous condition (U = 78.5, p = .04; d = 0.95). These findings represent a first step toward determining how task-level factors influence the lexical outcomes of children with cochlear implants.
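For readers unfamiliar with the statistics quoted above, the sketch below reproduces the two computations (a Mann-Whitney U test and Cohen's d with a pooled standard deviation) on made-up scores; the data are hypothetical, and the U, p, and d values in the abstract are the authors' own.

    import numpy as np
    from scipy import stats

    # Hypothetical word-learning scores (words learned per child, n = 9 per group)
    synchronous = np.array([4, 5, 3, 6, 5, 4, 6, 5, 4])
    asynchronous = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3])

    # Mann-Whitney U test comparing the two conditions
    u, p = stats.mannwhitneyu(synchronous, asynchronous, alternative="two-sided")

    # Cohen's d using a pooled standard deviation
    n1, n2 = len(synchronous), len(asynchronous)
    pooled_sd = np.sqrt(((n1 - 1) * synchronous.std(ddof=1) ** 2
                         + (n2 - 1) * asynchronous.std(ddof=1) ** 2) / (n1 + n2 - 2))
    d = (synchronous.mean() - asynchronous.mean()) / pooled_sd
    print(f"U = {u:.1f}, p = {p:.3f}, d = {d:.2f}")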
Affiliations
- Emily Lund: Department of Communication Sciences and Disorders, Texas Christian University, Fort Worth, TX, USA
- C. Melanie Schuele: Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
39. Gogate L. Development of Early Multisensory Perception and Communication: From Environmental and Behavioral to Neural Signatures. Dev Neuropsychol 2017; 41:269-272. [PMID: 28253037] [DOI: 10.1080/87565641.2017.1279429] [Citation(s) in RCA: 1]
Affiliations
- Lakshmi Gogate: Department of Communication Sciences and Disorders, University of Missouri, Columbia, Missouri
40. Chang L, de Barbaro K, Deák G. Contingencies Between Infants' Gaze, Vocal, and Manual Actions and Mothers' Object-Naming: Longitudinal Changes From 4 to 9 Months. Dev Neuropsychol 2017; 41:342-361. [DOI: 10.1080/87565641.2016.1274313] [Citation(s) in RCA: 15]
Affiliations
- Lucas Chang: Department of Cognitive Science, University of California San Diego, San Diego, California
- Kaya de Barbaro: School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia
- Gedeon Deák: Department of Cognitive Science, University of California San Diego, San Diego, California
41. Yurovsky D, Frank MC. Beyond naïve cue combination: salience and social cues in early word learning. Dev Sci 2017; 20. [PMID: 26575408] [PMCID: PMC4870162] [DOI: 10.1111/desc.12349] [Citation(s) in RCA: 57]
Abstract
Children learn their earliest words through social interaction, but it is unknown how much they rely on social information. Some theories argue that word learning is fundamentally social from its outset, with even the youngest infants understanding intentions and using them to infer a social partner's target of reference. In contrast, other theories argue that early word learning is largely a perceptual process in which young children map words onto salient objects. One way of unifying these accounts is to model word learning as weighted cue combination, in which children attend to many potential cues to reference, but only gradually learn the correct weight to assign each cue. We tested four predictions of this kind of naïve cue combination account, using an eye-tracking paradigm that combines social word teaching and two-alternative forced-choice testing. None of the predictions were supported. We thus propose an alternative unifying account: children are sensitive to social information early, but their ability to gather and deploy this information is constrained by domain-general cognitive processes. Developmental changes in children's use of social cues emerge not from learning the predictive power of social cues, but from the gradual development of attention, memory, and speed of information processing.
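The naïve cue combination account that the authors test can be made concrete with a toy model in which every cue casts a vote for each candidate referent and the votes are mixed by learned weights. The sketch below is purely illustrative: the cue values and weights are invented, not fit to the study's data, and the models in this literature are probabilistic rather than this simple normalization.

    import numpy as np

    def combine_cues(cue_votes, weights):
        """cue_votes: (n_cues, n_referents) support for each referent per cue;
        weights: (n_cues,) learned cue weights. Returns choice probabilities."""
        scores = weights @ cue_votes
        return scores / scores.sum()

    # Two referents (A, B), two cues: salience favours B, the speaker's gaze favours A
    salience = np.array([0.2, 0.8])
    gaze = np.array([0.9, 0.1])
    votes = np.vstack([salience, gaze])

    # A naive learner weights gaze weakly; an experienced learner has upweighted it
    for w_gaze in (0.2, 0.8):
        p = combine_cues(votes, np.array([1 - w_gaze, w_gaze]))
        print(f"weight on gaze = {w_gaze}: P(A) = {p[0]:.2f}, P(B) = {p[1]:.2f}")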
42. Gogate L, Hollich G. Early Verb-Action and Noun-Object Mapping Across Sensory Modalities: A Neuro-Developmental View. Dev Neuropsychol 2017; 41:293-307. [PMID: 28059566] [DOI: 10.1080/87565641.2016.1243112] [Citation(s) in RCA: 9]
Abstract
The authors provide an alternative to the traditional view that verbs are harder to learn than nouns by reviewing three lines of behavioral and neurophysiological evidence on word-mapping development across cultures. First, preverbal infants tune into word-action and word-object pairings using domain-general mechanisms. Second, whereas post-verbal infants from noun-friendly language environments experience verb-action mapping difficulty, infants from verb-friendly language environments do not. Third, children use language-specific conventions to learn all types of words, while remaining strongly influenced by their language environment. The authors additionally suggest neurophysiological research to advance these lines of evidence beyond traditional views of word learning.
Affiliations
- Lakshmi Gogate: Communication Sciences and Disorders, University of Missouri-Columbia, Columbia, Missouri
- George Hollich: Psychological Sciences, Purdue University, West Lafayette, Indiana
43. Patten E, Labban JD, Casenhiser DM, Cotton CL. Synchrony Detection of Linguistic Stimuli in the Presence of Faces: Neuropsychological Implications for Language Development in ASD. Dev Neuropsychol 2017; 41:362-374. [PMID: 28059555] [DOI: 10.1080/87565641.2016.1243113] [Citation(s) in RCA: 5]
Abstract
Children with autism spectrum disorders (ASD) may be impaired in their ability to detect audiovisual synchrony, and this ability may be influenced by the nature of the stimuli. We investigated the possibility that synchrony detection is disrupted by the presence of human faces by testing children with ASD using a preferential-looking, language-based paradigm. Children with low language abilities were significantly worse at detecting synchrony when the stimuli included an unobscured face than when the face was obscured. Findings suggest that the presence of faces may make multisensory processing more difficult. Implications for interventions are discussed, particularly those targeting attention to faces.
Affiliations
- Elena Patten: Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville, Tennessee
- Jeffrey D. Labban: Department of Kinesiology, University of North Carolina at Greensboro, Greensboro, North Carolina
- Devin M. Casenhiser: Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Knoxville, Tennessee
- Catherine L. Cotton: Department of Communication Sciences & Disorders, University of North Carolina at Greensboro, Greensboro, North Carolina
44. Canfield CF, Saudino KJ. The influence of infant characteristics and attention to social cues on early vocabulary. J Exp Child Psychol 2016; 150:112-129. [DOI: 10.1016/j.jecp.2016.05.005] [Citation(s) in RCA: 4]
45. Suanda SH, Smith LB, Yu C. The Multisensory Nature of Verbal Discourse in Parent-Toddler Interactions. Dev Neuropsychol 2016; 41:324-341. [PMID: 28128992] [PMCID: PMC7263485] [DOI: 10.1080/87565641.2016.1256403] [Citation(s) in RCA: 23]
Abstract
Toddlers learn object names in sensory-rich contexts. Many argue that this multisensory experience facilitates learning. Here, we examine how toddlers' multisensory experience is linked to another aspect of their experience associated with better learning: the temporally extended nature of verbal discourse. We observed parent-toddler dyads as they played with, and as parents talked about, a set of objects. Analyses revealed links between the multisensory and extended nature of speech, highlighting inter-connections and redundancies in the environment. We discuss the implications of these results for our understanding of early discourse, multisensory communication, and how the learning environment shapes language development.
Affiliations
- Sumarga H. Suanda: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
- Linda B. Smith: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
- Chen Yu: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
46. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds. J Exp Child Psychol 2016; 145:1-10. [DOI: 10.1016/j.jecp.2015.12.002] [Citation(s) in RCA: 42]
47. Gogate L, Maganti M. The Dynamics of Infant Attention: Implications for Crossmodal Perception and Word-Mapping Research. Child Dev 2016; 87:345-364. [PMID: 27015082] [DOI: 10.1111/cdev.12509] [Citation(s) in RCA: 42]
Abstract
The present review is a novel synthesis of research on infants' attention in two related domains: crossmodal perception and word mapping. The authors hypothesize that infant attention is malleable and shifts in real time. They review dynamic models of infant attention and provide empirical evidence for parallel trends in attention shifts across the two domains that support this hypothesis. When infants are exposed to competing auditory-visual stimuli in experiments, multiple factors cause attention to shift during infant-environment interactions. Additionally, attention shifts across nested timescales, and individual variations in attention systematically explain development. The authors suggest future research to further elucidate the causal mechanisms that influence infants' attention dynamics, emphasizing the need to examine individual variations that index shifts over time.
Affiliations
- Lakshmi Gogate: Florida Gulf Coast University; University of Missouri, Columbia
48. Rader NDV, Zukow-Goldring P. The Role of Speech-Gesture Synchrony in Clipping Words From the Speech Stream: Evidence From Infant Pupil Responses. Ecological Psychology 2015. [DOI: 10.1080/10407413.2015.1086226] [Citation(s) in RCA: 6]
49. Gogate L, Maganti M, Bahrick LE. Cross-cultural evidence for multimodal motherese: Asian Indian mothers' adaptive use of synchronous words and gestures. J Exp Child Psychol 2015; 129:110-126. [PMID: 25285369] [PMCID: PMC4252564] [DOI: 10.1016/j.jecp.2014.09.002] [Citation(s) in RCA: 27]
Abstract
In a quasi-experimental study, 24 Asian Indian mothers were asked to teach novel (target) names for two objects and two actions to their children at three levels of lexical-mapping development: prelexical (5-8 months), early lexical (9-17 months), and advanced lexical (20-43 months). Target naming (n = 1482) and non-target naming (n = 2411) were coded for synchronous spoken words and object motion (multimodal motherese) and for other naming styles. Indian mothers abundantly used multimodal motherese with target words to highlight novel word-referent relations, paralleling earlier findings from American mothers. They used it with target words more often for prelexical infants than for advanced lexical children, and to name target actions later in children's development. Unlike American mothers, Indian mothers also abundantly used multimodal motherese to name target objects later in children's development. Finally, monolingual mothers who spoke a verb-dominant Indian language used multimodal motherese more often than bilingual mothers who also spoke noun-dominant English to their children. The findings suggest that, within a dynamic and reciprocal mother-infant communication system, multimodal motherese adapts to unify novel words and referents across cultures, adjusting to children's level of lexical development and to ambient language-specific lexical dominance hierarchies.
Affiliations
- Lakshmi Gogate: Psychology, Florida Gulf Coast University, Fort Myers, FL 33965
- Madhavilatha Maganti: Center for Neural and Cognitive Sciences, University of Hyderabad, Gachibowli, Hyderabad, Andhra Pradesh, India
- Lorraine E. Bahrick: Psychology, Florida International University, DM Building, Miami, Florida 33199
50. Patten E, Watson LR, Baranek GT. Temporal Synchrony Detection and Associations with Language in Young Children with ASD. Autism Res Treat 2014; 2014:678346. [PMID: 25614835] [PMCID: PMC4295130] [DOI: 10.1155/2014/678346] [Citation(s) in RCA: 19]
Abstract
Temporally synchronous audio-visual stimuli serve to recruit attention and enhance learning, including language learning in infants. Although few studies have examined this effect in children with autism, the ability to detect temporal synchrony between auditory and visual stimuli appears to be impaired in this population, particularly for social-linguistic stimuli delivered via pairings of oral movement and spoken language. However, children with autism can detect audio-visual synchrony given nonsocial stimuli (objects dropping and their corresponding sounds). We tested whether preschool children with autism could detect audio-visual synchrony given video recordings of linguistic stimuli paired with movement of related toys in the absence of faces. As a group, children with autism demonstrated the ability to detect audio-visual synchrony. Further, the amount of time they attended to the synchronous condition was positively correlated with receptive language. Findings suggest that object manipulations may enhance multisensory processing in linguistic contexts. Moreover, associations between synchrony detection and language development suggest that better processing of multisensory stimuli may guide and direct attention to communicative events, thus enhancing linguistic development.
Affiliations
- Elena Patten: Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, 434 South Stadium Hall, Knoxville, TN 37996, USA
- Linda R. Watson: Division of Speech & Hearing Sciences, CB No. 7190, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Grace T. Baranek: Division of Occupational Sciences, CB No. 7122, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA