1
Dehaene-Lambertz G. Perceptual Awareness in Human Infants: What is the Evidence? J Cogn Neurosci 2024; 36:1599-1609. PMID: 38527095. DOI: 10.1162/jocn_a_02149.
Abstract
Perceptual awareness in infants during the first year of life is understudied, despite the philosophical, scientific, and clinical importance of understanding how and when consciousness emerges during human brain development. Although parents are undoubtedly convinced that their infant is conscious, the lack of adequate experimental paradigms to address this question in preverbal infants has been a hindrance to research on this topic. However, recent behavioral and brain imaging studies have shown that infants are engaged in complex learning from an early age and that their brains are more structured than traditionally thought. I present a brief overview of these results, which might provide indirect evidence of early perceptual awareness, and then describe how a more systematic approach to this question could stand within the framework of global workspace theory, which identifies specific signatures of conscious perception in adults. Relying on these brain signatures as a benchmark for conscious perception, we can deduce that conscious perception exists in the second half of the first year, whereas the evidence before the age of 5 months is less solid, mainly because of the paucity of studies. The question of conscious perception before term remains open, with the possibility of short periods of conscious perception, which would facilitate early learning. Advances in brain imaging and growing interest in this subject should enable us to gain a better understanding of this important issue in the years to come.
2
Casillas M, Casey K. Daylong egocentric recordings in small- and large-scale language communities: A practical introduction. Adv Child Dev Behav 2024; 66:29-53. PMID: 39074924. DOI: 10.1016/bs.acdb.2024.05.002.
Abstract
Daylong egocentric (i.e., participant-centered) recordings promise an unprecedented view into the experiences that drive early language learning, impacting both assumptions and theories about how learning happens. Thanks to recent advances in technology, collecting long-form audio, photo, and video recordings with child-worn devices is cheaper and more convenient than ever. These recording methods can be similarly deployed across small- and large-scale language communities around the world, opening up enormous possibilities for comparative research on early language development. However, building new high-quality naturalistic corpora is a massive investment of time and money. In this chapter, we provide a practical look into considerations relevant for developing and managing daylong egocentric recording projects: Is it possible to re-use existing data? How much time will manual annotation take? Can automated tools sufficiently tackle the questions at hand? We conclude by outlining two exciting directions for future naturalistic child language research.
Affiliation(s)
- Marisa Casillas
- Comparative Human Development Department, University of Chicago.
3
Schroer SE, Yu C. Word learning is hands-on: Insights from studying natural behavior. Adv Child Dev Behav 2024; 66:55-79. PMID: 39074925. DOI: 10.1016/bs.acdb.2024.04.002.
Abstract
Infants' interactions with social partners are richly multimodal. Dyads respond to and coordinate their visual attention, gestures, vocalizations, speech, manual actions, and manipulations of objects. Although infants are typically described as active learners, previous experimental research has often focused on how infants learn from stimuli that are well-crafted by researchers. Recent research studying naturalistic, free-flowing interactions has explored the meaningful patterns in dyadic behavior that relate to language learning. Infants' manual engagement and exploration of objects supports their visual attention, creates salient and diverse views of objects, and elicits labeling utterances from parents. In this chapter, we discuss how the cascade of behaviors created by infant multimodal attention plays a fundamental role in shaping their learning environment, supporting real-time word learning and predicting later vocabulary size. We draw from recent at-home and cross-cultural research to test the validity of our mechanistic pathway and discuss why hands matter so much for learning. Our goal is to convey the critical need for developmental scientists to study natural behavior and move beyond our "tried-and-true" paradigms, like screen-based tasks. Studying natural behavior revealed the role of infants' hands in early language learning, though it was a behavior that was often uncoded, undiscussed, or not even allowed in decades of previous research. When we study infants in their natural environment, they can show us how they learn about and explore their world. Word learning is hands-on.
Affiliation(s)
- Sara E Schroer
- The Center for Perceptual Systems, The University of Texas at Austin; Department of Psychology, The University of Texas at Austin.
- Chen Yu
- The Center for Perceptual Systems, The University of Texas at Austin; Department of Psychology, The University of Texas at Austin
4
Weaver H, Zettersten M, Saffran JR. Becoming word meaning experts: Infants' processing of familiar words in the context of typical and atypical exemplars. Child Dev 2024. PMID: 38822689. DOI: 10.1111/cdev.14120.
Abstract
How do infants become word meaning experts? This registered report investigated the structure of infants' early lexical representations by manipulating the typicality of exemplars from familiar animal categories. 14- to 18-month-old infants (N = 84; 51 female; M = 15.7 months; race/ethnicity: 64% White, 8% Asian, 2% Hispanic, 1% Black, and 23% multiple categories; participating 2022-2023) were tested on their ability to recognize typical and atypical category exemplars after hearing familiar basic-level category labels. Infants robustly recognized both typical (d = 0.79, 95% CI [0.54, 1.03]) and atypical (d = 0.70, 95% CI [0.46, 0.94]) exemplars, with no significant difference between typicality conditions (d = 0.14, 95% CI [-0.08, 0.35]). These results support a broad-to-narrow account of infants' early word meanings. Implications for the role of experience in the development of lexical knowledge are discussed.
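The reported intervals can be sanity-checked against the sample size. As a sketch (the authors presumably used a more exact method), the standard large-sample approximation for the standard error of a one-sample Cohen's d reproduces the reported bounds to within about 0.01:

```python
import math

def cohens_d_ci(d, n, z=1.96):
    """Approximate 95% CI for a one-sample Cohen's d, using the
    large-sample standard error sqrt(1/n + d^2 / (2n))."""
    se = math.sqrt(1 / n + d * d / (2 * n))
    return d - z * se, d + z * se

# Reported above: typical d = 0.79, 95% CI [0.54, 1.03];
# atypical d = 0.70, 95% CI [0.46, 0.94]; N = 84.
typical = cohens_d_ci(0.79, 84)
atypical = cohens_d_ci(0.70, 84)
```

With N = 84 this yields roughly (0.545, 1.035) and (0.461, 0.939), consistent with the intervals reported in the abstract.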
Affiliation(s)
- Haley Weaver
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Martin Zettersten
- Department of Psychology, Princeton University, Princeton, New Jersey, USA
- Jenny R Saffran
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
5
Breitfeld E, Saffran JR. Early word learning is influenced by physical environments. Child Dev 2024; 95:962-971. PMID: 38018684. PMCID: PMC11023760. DOI: 10.1111/cdev.14046.
Abstract
During word learning moments, toddlers experience labels and objects in particular environments. Do toddlers learn words better when the physical environment creates contrasts between objects with different labels? Thirty-six 21- to 24-month-olds (92% White, 22 female, data collected 8/21-4/22) learned novel words for novel objects presented using an apparatus that mimicked a shape-sorter toy. The manipulation concerned whether the physical features of the environments in which objects occurred heightened the contrasts between the objects. Toddlers only learned labels for objects presented in environments where the apparatus heightened the contrast between the objects (b = .068). These results emphasize the importance of investigating word learning in physical environments that more closely approximate young children's everyday experiences with objects.
Affiliation(s)
- Elise Breitfeld
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Jenny R Saffran
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
6
Wojcik EH, Pierce MC, Stevens G, Goulding SJ. Referent-oriented interactions in infancy: A naturalistic, longitudinal case study from an English-speaking household. Infant Behav Dev 2024; 74:101911. PMID: 38056189. DOI: 10.1016/j.infbeh.2023.101911.
Abstract
Caregivers use a combination of labeling, pointing, object grasping, and gaze to communicate with infants about referents in their environment. By two years of age, children reliably use these referent-oriented cues to communicate and learn. While there is some evidence from lab-based studies that younger infants attend to and use referent-oriented cues during communication, more naturalistic studies have found that in the first year of life, infants do not robustly leverage these cues during dyadic interactions. The current study examined parent and infant gaze, touching, pointing, and reaching to referents for a wide range of nouns, verbs, adjectives, and other early-learned words during 59 one-hour head-camera recordings sampled from one English-learning infant's life between 6 and 12 months of age. We found substantial variability across individual words for all cues. Some variability was explained by referent concreteness and the grammatical category of the label. The parent's touching of labeled referents increased across months, suggesting that parent-infant-referent interactions may change with development. Future studies should investigate the trajectories of specific types of words and contexts, rather than attempting to discover possibly non-existent universal trajectories of parent and infant referent-oriented behaviors.
7
Gordon KR, Grieco-Calub TM. Children build their vocabularies in noisy environments: The necessity of a cross-disciplinary approach to understand word learning. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1671. PMID: 38043926. PMCID: PMC10939936. DOI: 10.1002/wcs.1671.
Abstract
Research within the language sciences has informed our understanding of how children build vocabulary knowledge, especially during early childhood and the early school years. However, to date, our understanding of word learning in children is based primarily on research in quiet laboratory settings. The everyday environments that children inhabit, such as schools, homes, and day cares, are typically noisy. To better understand vocabulary development, we need to understand the effects of background noise on word learning. To gain this understanding, a cross-disciplinary approach between researchers in the language and hearing sciences, in partnership with parents, educators, and clinicians, is ideal. Through this approach we can identify characteristics of effective vocabulary instruction that take into account the background noise present in children's learning environments. Furthermore, we can identify characteristics of children who are likely to struggle with learning words in noisy environments. For example, differences in vocabulary knowledge, verbal working memory abilities, and attention skills will likely influence children's ability to learn words in the presence of background noise. These children require effective interventions to support their vocabulary development, which subsequently should support their ability to process and learn language in noisy environments. Overall, this cross-disciplinary approach will inform theories of language development and inform educational and intervention practices designed to support children's vocabulary development. This article is categorized under: Psychology > Language; Psychology > Learning; Psychology > Theory and Methods.
8
Vong WK, Wang W, Orhan AE, Lake BM. Grounded language acquisition through the eyes and ears of a single child. Science 2024; 383:504-511. PMID: 38300999. DOI: 10.1126/science.adi1374.
Abstract
Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child's everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child's input.
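The learning setting the abstract describes (correlated visual and linguistic streams feeding a relatively generic associative learner) can be caricatured in a few lines. This is a toy sketch with invented data, not the authors' neural-network model: it accumulates a running mean of the visual features that co-occur with each heard word, then names held-out views by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the child's input: each "episode" pairs a noisy
# visual feature vector with a co-occurring spoken word.
WORDS = ["ball", "cat", "car", "cup"]
DIM = 16
prototypes = {w: rng.normal(size=DIM) for w in WORDS}  # latent visual referents

def episode(word):
    """One co-occurring (visual features, word) pair with sensory noise."""
    return prototypes[word] + 0.2 * rng.normal(size=DIM), word

# Associative learning: maintain a running mean of the visual features
# observed with each word -- the simplest cross-modal association.
counts = {w: 0 for w in WORDS}
assoc = {w: np.zeros(DIM) for w in WORDS}
for _ in range(500):
    visual, word = episode(rng.choice(WORDS))
    counts[word] += 1
    assoc[word] += (visual - assoc[word]) / counts[word]

def name_referent(visual):
    """Name a new visual input by cosine similarity to the learned associations."""
    sims = {w: float(visual @ v / (np.linalg.norm(visual) * np.linalg.norm(v)))
            for w, v in assoc.items()}
    return max(sims, key=sims.get)

# A held-out noisy view of a known referent, for testing retrieval.
test_view, _ = episode("cat")
```

The paper's contribution lies in showing that feature learning and association of this general flavor, scaled up to raw video and speech transcripts, suffice for many word-referent mappings; this sketch only illustrates the associative core.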
Affiliation(s)
- Wai Keen Vong
- Center for Data Science, New York University, New York, NY, USA
- Wentao Wang
- Center for Data Science, New York University, New York, NY, USA
- A Emin Orhan
- Center for Data Science, New York University, New York, NY, USA
- Brenden M Lake
- Center for Data Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
9
Campbell E, Casillas R, Bergelson E. The role of vision in the acquisition of words: Vocabulary development in blind toddlers. Dev Sci 2024:e13475. PMID: 38229227. DOI: 10.1111/desc.13475.
Abstract
What is vision's role in driving early word production? To answer this, we assessed parent-report vocabulary questionnaires administered to congenitally blind children (N = 40, Mean age = 24 months [R: 7-57 months]) and compared the size and contents of their productive vocabulary to those of a large normative sample of sighted children (N = 6574). We found that on average, blind children showed a roughly half-year vocabulary delay relative to sighted children, amid considerable variability. However, the content of blind and sighted children's vocabulary was statistically indistinguishable in word length, part of speech, semantic category, concreteness, interactiveness, and perceptual modality. At a finer-grained level, we also found that words' perceptual properties intersect with children's perceptual abilities. Our findings suggest that while an absence of visual input may initially make vocabulary development more difficult, the content of the early productive vocabulary is largely resilient to differences in perceptual access.
Research Highlights:
- Infants and toddlers born blind (with no other diagnoses) show a 7.5-month productive vocabulary delay on average, with wide variability.
- Across the studied age range (7-57 months), vocabulary delays widened with age.
- Blind and sighted children's early vocabularies contain similar distributions of word lengths, parts of speech, semantic categories, and perceptual modalities.
- Blind children (but not sighted children) were more likely to say visual words which could also be experienced through other senses.
Affiliation(s)
- Erin Campbell
- Duke University, Durham, North Carolina, USA
- Wheelock College of Education & Human Development, Boston University, Boston, Massachusetts, USA
- Elika Bergelson
- Duke University, Durham, North Carolina, USA
- Department of Psychology, Harvard University, Cambridge, Massachusetts, USA
10
Seidl AH, Indarjit M, Borovsky A. Touch to learn: Multisensory input supports word learning and processing. Dev Sci 2024; 27:e13419. PMID: 37291692. PMCID: PMC10704002. DOI: 10.1111/desc.13419.
Abstract
Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods, we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.
Affiliation(s)
- Amanda H Seidl
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Michelle Indarjit
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Arielle Borovsky
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
11
Aho K, Roads BD, Love BC. Signatures of cross-modal alignment in children's early concepts. Proc Natl Acad Sci U S A 2023; 120:e2309688120. PMID: 37819984. PMCID: PMC10589699. DOI: 10.1073/pnas.2309688120.
Abstract
Whether supervised or unsupervised, human and machine learning is usually characterized as event-based. However, learning may also proceed by systems alignment in which mappings are inferred between entire systems, such as visual and linguistic systems. Systems alignment is possible because items that share similar visual contexts, such as a car and a truck, will also tend to share similar linguistic contexts. Because of the mirrored similarity relationships across systems, the visual and linguistic systems can be aligned at some later time absent either input. In a series of simulation studies, we considered whether children's early concepts support systems alignment. We found that children's early concepts are close to optimal for inferring novel concepts through systems alignment, enabling agents to correctly infer more than 85% of visual-word mappings absent supervision. One possible explanation for why children's early concepts support systems alignment is that they are distinguished structurally by their dense semantic neighborhoods. Artificial agents using these structural features to select concepts proved highly effective, both in environments mirroring children's conceptual world and those that exclude the concepts that children commonly acquire. For children, systems alignment and event-based learning likely complement one another. Likewise, artificial systems can benefit from incorporating these developmental principles.
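The core idea (mirrored similarity relationships let two systems be mapped onto one another with no paired supervision) can be illustrated with a toy simulation. This sketch uses invented data and a deliberately crude matching rule (comparing each item's sorted within-system similarity profile across systems); it is not the authors' simulation method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two systems ("visual" and "linguistic") derived from a shared conceptual
# structure, with the linguistic items presented in an unknown order.
N, DIM, NOISE = 10, 6, 0.01
latent = rng.normal(size=(N, DIM))
visual = latent + NOISE * rng.normal(size=(N, DIM))
perm = rng.permutation(N)  # the true correspondence, unknown to the learner
linguistic = latent[perm] + NOISE * rng.normal(size=(N, DIM))

def sim(X):
    """Within-system cosine similarity matrix."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def signatures(S):
    """Each item's similarities to all other items, sorted so the signature
    does not depend on item order within its system."""
    off = S[~np.eye(len(S), dtype=bool)].reshape(len(S), -1)
    return np.sort(off, axis=1)

sig_v = signatures(sim(visual))
sig_l = signatures(sim(linguistic))

# Align the systems without any paired supervision: match each visual item
# to the linguistic item with the nearest similarity signature.
mapping = np.array([np.argmin(np.linalg.norm(sig_l - s, axis=1)) for s in sig_v])
# mapping[i] is the linguistic index inferred for visual item i.
```

Because both systems inherit the same similarity relationships from the shared structure, the matching recovers the hidden correspondence; the paper's point is that children's early concepts are structured so as to make this kind of alignment unusually effective.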
Affiliation(s)
- Kaarina Aho
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Brett D. Roads
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Bradley C. Love
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- The Alan Turing Institute, London NW1 2DB, United Kingdom
12
Carr TH, Arrington CM, Fitzpatrick SM. Integrating cognition in the laboratory with cognition in the real world: the time cognition takes, task fidelity, and finding tasks when they are mixed together. Front Psychol 2023; 14:1137698. PMID: 37691795. PMCID: PMC10491893. DOI: 10.3389/fpsyg.2023.1137698.
Abstract
It is now possible for real-life activities, unfolding over their natural range of temporal and spatial scales, to become the primary targets of cognitive studies. Movement toward this type of research will require an integrated methodological approach currently uncommon in the field. When executed hand in hand with thorough and ecologically valid empirical description, properly developed laboratory tasks can serve as model systems to capture the essentials of a targeted real-life activity. When integrated together, data from these two kinds of studies can facilitate causal analysis and modeling of the mental and neural processes that govern that activity, enabling a fuller account than either method can provide on its own. The resulting account, situated in the activity's natural environmental, social, and motivational context, can then enable effective and efficient development of interventions to support and improve the activity as it actually unfolds in real time. We believe that such an integrated multi-level research program should be common rather than rare and is necessary to achieve scientifically and societally important goals. The time is right to finally abandon the boundaries that separate the laboratory from the outside world.
Affiliation(s)
- Thomas H. Carr
- Program in Cognition and Cognitive Neuroscience, Department of Psychology, Michigan State University, East Lansing, MI, United States
- Susan M. Fitzpatrick
- LSRT Associates, St. Louis, MO, United States
- James S. McDonnell Foundation, St. Louis, MO, United States