1
Filipe MG, Severino C, Vigário M, Frota S. Development and validation of a parental report of toddlers' prosodic skills. Clinical Linguistics & Phonetics 2024;38:509-528. [PMID: 37348063] [DOI: 10.1080/02699206.2023.2226302]
Abstract
This study describes the development and validation of Proso-Quest, a parental report of toddlers' prosodic skills designed to assess early prosodic development in European Portuguese. The development and validation of Proso-Quest proceeded in three phases. Phase 1 was undertaken (a) to establish the structure of the parental report and select its items on the basis of previous work, (b) to collect input from experts on prosodic development, and (c) to revise the report after a pilot study. Phase 2 examined internal consistency, test-retest reliability, and correlations between Proso-Quest and a validated measure of vocabulary development. Finally, Phase 3 evaluated the discriminant validity of the report in a clinical sample that frequently presents prosodic impairments. The psychometric properties of Proso-Quest indicated excellent internal consistency, high test-retest reliability, significant correlations with a validated measure of vocabulary development, and sensitivity in identifying prosodic delays. The parental report thus showed evidence of reliability and validity in describing early prosodic development and impairment, and it may be a useful tool in research and educational assessments, as well as in clinical assessments.
Affiliation(s)
- Marisa G Filipe
- Center of Linguistics, University of Lisbon, Lisbon, Portugal
- Cátia Severino
- Center of Linguistics, University of Lisbon, Lisbon, Portugal
- Marina Vigário
- Center of Linguistics, University of Lisbon, Lisbon, Portugal
- Sónia Frota
- Center of Linguistics, University of Lisbon, Lisbon, Portugal
2
Marimon M, Langus A, Höhle B. Prosody outweighs statistics in 6-month-old German-learning infants' speech segmentation. Infancy 2024. [PMID: 38703064] [DOI: 10.1111/infa.12593]
Abstract
It is well established that infants use various cues to find words within fluent speech from about 7 to 8 months of age. Research suggests that two main mechanisms support infants' speech segmentation: prosodic cues such as word stress patterns, and distributional cues such as transitional probabilities (TPs). We tested 6-month-old German-learning infants' use of prosodic and statistical cues for speech segmentation in three experiments. In Experiment 1, infants were familiarized with an artificial language string in which TPs signaled either word boundaries or iambic words, a stress pattern that is disfavored in German. Experiment 2 was a control in which only the test phase was presented. In Experiment 3, prosodic cues were absent from the string and only TPs signaled word boundaries. All experiments included the same conditions at test: disyllabic words with high TPs in the string, words with low TPs, and words with non-co-occurring syllables. Results showed that infants relied more strongly on prosodic cues than on TPs for word segmentation. Notably, no evidence of segmentation emerged when prosodic cues were absent from the string. This finding underlines the early impact of language-specific structural properties on segmentation mechanisms.
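For readers unfamiliar with the statistic, the forward transitional probability used in this line of work is TP(y|x) = count(xy) / count(x) over adjacent syllables. A minimal Python sketch, using invented toy syllables rather than any study's stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probabilities TP(y|x) = count(xy) / count(x)
    over a sequence of syllables."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# A stream built from two toy "words" (ba-bi and gu-go) in varying order:
# within-word TPs come out high, TPs across word boundaries come out lower.
stream = ["ba", "bi", "gu", "go", "gu", "go", "ba", "bi", "ba", "bi", "gu", "go"]
tps = transitional_probabilities(stream)
```

In this toy stream the within-word transitions (ba→bi, gu→go) reach 1.0 while transitions spanning a word boundary stay below 1.0; that dip is the cue that TP-based segmentation accounts exploit.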
Affiliation(s)
- Mireia Marimon
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain
- Alan Langus
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Barbara Höhle
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
3
Luo Q, Gao L, Yang Z, Chen S, Yang J, Lu S. Integrated sentence-level speech perception evokes strengthened language networks and facilitates early speech development. Neuroimage 2024;289:120544. [PMID: 38365164] [DOI: 10.1016/j.neuroimage.2024.120544]
Abstract
Natural poetic speech (e.g., proverbs, nursery rhymes, and commercial ads) with strong prosodic regularities is easily memorized by children, and its harmonious acoustic patterns are thought to facilitate integrated sentence processing. Do children have specific neural pathways for perceiving such poetic utterances, and does their speech development benefit from them? We recorded the task-induced hemodynamic changes of 94 children aged 2 to 12 years using functional near-infrared spectroscopy (fNIRS) while they listened to poetic and non-poetic natural sentences. Seventy-three adults were recruited as controls to investigate the developmental specificity of the child group. The results indicated that perceiving poetic sentences is a highly integrated process characterized by a lower brain workload in both groups. However, an early activated large-scale network, coordinated by hubs with diverse connectivity, was induced only in the child group. Additionally, poetic speech evoked activation in phonological encoding regions in the child group but not in the adult controls, an effect that decreased with children's age. Neural responses to poetic speech were positively linked to children's speech communication performance, especially its fluency and semantic aspects. These results reveal children's neural sensitivity to integrated speech perception, which facilitates early speech development by strengthening more sophisticated language networks and the perception-production circuit.
Affiliation(s)
- Qinqin Luo
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Chinese Language and Literature, The Chinese University of Hong Kong, Shatin, Hong Kong
- Leyan Gao
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Zhirui Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong
- Sihui Chen
- Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Shuo Lu
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
4
Yu L, Huang D, Wang S, Zhang Y. Reduced Neural Specialization for Word-level Linguistic Prosody in Children with Autism. J Autism Dev Disord 2023;53:4351-4367. [PMID: 36038793] [DOI: 10.1007/s10803-022-05720-x]
Abstract
Children with autism often show atypical brain lateralization for speech and language processing; however, it is unclear which linguistic component contributes to this phenomenon. Here we measured event-related potential (ERP) responses in 21 school-age autistic children and 25 age-matched neurotypical (NT) peers while they listened to word-level prosodic stimuli. We found that both groups displayed larger late negative response (LNR) amplitudes to native prosody than to nonnative prosody; however, unlike the NT group, which exhibited a left-lateralized LNR distinction for prosodic phonology, the autism group showed no evidence of LNR lateralization. Moreover, in both groups, the LNR effects were present only for prosodic phonology and not for phoneme-free prosodic acoustics. These results extend the findings of inadequate neural specialization for language in autism to sub-lexical prosodic structures.
Affiliation(s)
- Luodi Yu
- Center for Autism Research, School of Education, Guangzhou University, Wenyi Bldg, Guangzhou, China
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Dan Huang
- Guangzhou Rehabilitation & Research Center for Children with ASD, Guangzhou Cana School, Guangzhou, China
- Suiping Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
5
Santolin C, Crespo-Bojorque P, Sebastian-Galles N, Toro JM. Sensitivity to the sonority sequencing principle in rats (Rattus norvegicus). Sci Rep 2023;13:17036. [PMID: 37813950] [PMCID: PMC10562444] [DOI: 10.1038/s41598-023-44081-y]
Abstract
Although diverse, human languages exhibit universal structures. A salient example is the syllable, an important structure in language acquisition. The structure of syllables is governed by the Sonority Sequencing Principle (SSP), a linguistic constraint according to which phoneme intensity must rise at the onset, peak at the nucleus (the vowel), and decline at the offset. This structure generates an intensity pattern with an arch shape. In humans, sensitivity to the restrictions the SSP imposes on syllables appears at birth, raising questions about its emergence. We investigated the biological mechanisms underlying the SSP by testing a nonhuman, non-vocal-learning species with the same language materials used with humans. Rats discriminated well-structured syllables (e.g., pras) from ill-structured ones (e.g., lbug) after being familiarized with syllabic structures conforming to the SSP. In contrast, we observed no evidence that rats familiarized with syllables violating this constraint discriminated at test. This research provides the first evidence of sensitivity to the SSP in a nonhuman species, which likely stems from evolutionarily ancient, cross-species biological predispositions for natural acoustic patterns. Humans' early sensitivity to the SSP possibly emerges from general auditory processing that favors sounds with an arch-shaped envelope, common among animal vocalizations. Ancient sensory mechanisms, responsible for processing vocalizations in the wild, would constitute an entry gate for human language acquisition.
Affiliation(s)
- Chiara Santolin
- Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Juan Manuel Toro
- Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Catalan Institution for Research and Advanced Studies, Barcelona, Spain
6
Dal Ben R, Prequero IT, Souza DDH, Hay JF. Speech Segmentation and Cross-Situational Word Learning in Parallel. Open Mind (Camb) 2023;7:510-533. [PMID: 37637304] [PMCID: PMC10449405] [DOI: 10.1162/opmi_a_00095]
Abstract
Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects, but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure to find that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such, but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions on how to effectively measure the knowledge arising from these learning experiences.
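The cross-situational mapping studied here, stripped to its statistical core, amounts to tallying word-object co-occurrences across individually ambiguous scenes: any single trial underdetermines the mapping, but the correct referent accumulates the highest count over trials. A hypothetical Python sketch; the words, objects, and trial structure below are invented for illustration and are not the study's stimuli:

```python
from collections import defaultdict

def cross_situational_counts(trials):
    """Tally word-object co-occurrences across ambiguous trials.
    Each trial pairs a set of heard words with a set of visible objects."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in trials:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return counts

# Three ambiguous trials: only aggregation reveals that "blicket" -> OBJ_A.
trials = [
    ({"blicket", "dax"}, {"OBJ_A", "OBJ_B"}),
    ({"blicket", "wug"}, {"OBJ_A", "OBJ_C"}),
    ({"dax", "wug"},     {"OBJ_B", "OBJ_C"}),
]
counts = cross_situational_counts(trials)
best = max(counts["blicket"], key=counts["blicket"].get)
```

After these three trials, "blicket" has co-occurred with OBJ_A twice but with every other object only once, so the aggregate statistics disambiguate what no single trial could.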
Affiliation(s)
- Rodrigo Dal Ben
- Universidade Federal de São Carlos, São Carlos, São Paulo, Brazil
7
Martinez-Alvarez A, Benavides-Varela S, Lapillonne A, Gervain J. Newborns discriminate utterance-level prosodic contours. Dev Sci 2023;26:e13304. [PMID: 35841609] [DOI: 10.1111/desc.13304]
Abstract
Prosody is the fundamental organizing principle of spoken language, carrying lexical, morphosyntactic, and pragmatic information. It therefore provides highly relevant input for language development. Are infants sensitive to this important aspect of spoken language early on? In this study, we asked whether infants can discriminate well-formed utterance-level prosodic contours from ill-formed, backward prosodic contours at birth. The deviant prosodic contour was obtained by time-reversing the original one and superimposing it on the otherwise intact segmental information. The resulting backward prosodic contour was thus unfamiliar to the infants and ill-formed in French. We used near-infrared spectroscopy (NIRS) in 1-3-day-old French newborns (n = 25) to measure their brain responses to well-formed contours as standards and their backward-prosody counterparts as deviants in the frontal, temporal, and parietal areas bilaterally. A cluster-based permutation test revealed greater responses to the Deviant than to the Standard condition in right temporal areas. These results suggest that newborns are already capable of detecting utterance-level prosodic violations at birth, a key ability for breaking into the native language, and that this ability is supported by brain areas similar to those in adults. RESEARCH HIGHLIGHTS: At birth, infants have sophisticated speech perception abilities. Prosody may be particularly important for early language development. We show that newborns are already capable of discriminating utterance-level prosodic contours. This discrimination can be localized to the right hemisphere of the neonate brain.
Affiliation(s)
- Anna Martinez-Alvarez
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Integrative Neuroscience and Cognition Center, Université Paris Cité & CNRS, Paris, France
- Alexandre Lapillonne
- Hôpital Necker - Enfants Malades, Department of Neonatology, Université Paris Cité, Paris, France
- Judit Gervain
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Integrative Neuroscience and Cognition Center, Université Paris Cité & CNRS, Paris, France
8
Benjamin L, Fló A, Palu M, Naik S, Melloni L, Dehaene-Lambertz G. Tracking transitional probabilities and segmenting auditory sequences are dissociable processes in adults and neonates. Dev Sci 2023;26:e13300. [PMID: 35772033] [DOI: 10.1111/desc.13300]
Abstract
Since speech is a continuous stream with no systematic boundaries between words, how do pre-verbal infants manage to discover words? A proposed solution is that they might use the transitional probability between adjacent syllables, which drops at word boundaries. Here, we tested the limits of this mechanism by increasing the size of the word unit to four syllables, and its automaticity by testing asleep neonates. Using markers of statistical learning in neonates' EEG, compared with adult behavioral performance on the same task, we confirmed that statistical learning is automatic enough to be efficient even in sleeping neonates. We also found that: (1) successfully tracking transitional probabilities (TPs) in a sequence is not sufficient to segment it; (2) prosodic cues as subtle as subliminal pauses restore word-segmentation capacities; (3) adults' and neonates' capacities to segment streams are remarkably similar despite the differences in maturation and expertise. Finally, we observed that learning increased the overall similarity of neural responses across infants during exposure to the stream, providing a novel neural marker for monitoring learning. Thus, from birth, infants are equipped with adult-like tools allowing them to extract small, coherent word-like units from auditory streams, based on the combination of statistical analyses and auditory parsing cues. RESEARCH HIGHLIGHTS: Successfully tracking transitional probabilities in a sequence is not always sufficient to segment it. Word segmentation based solely on transitional probability is limited to bi- or tri-syllabic elements. Prosodic cues as subtle as subliminal pauses restore chunking capacities for quadrisyllabic units in sleeping neonates and awake adults.
Affiliation(s)
- Lucas Benjamin
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, Île-de-France, France
- Ana Fló
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, Île-de-France, France
- Marie Palu
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, Île-de-France, France
- Shruti Naik
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, Île-de-France, France
- Lucia Melloni
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Hessen, Germany
- Department of Neurology, NYU Grossman School of Medicine, New York City, New York, USA
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif-sur-Yvette, Île-de-France, France
9
Endress AD, Johnson SP. Hebbian, correlational learning provides a memory-less mechanism for Statistical Learning irrespective of implementational choices: Reply to Tovar and Westermann (2022). Cognition 2023;230:105290. [PMID: 36240613] [DOI: 10.1016/j.cognition.2022.105290]
Abstract
Statistical learning relies on detecting the frequency of co-occurrences of items and has been proposed to be crucial for a variety of learning problems, notably to learn and memorize words from fluent speech. Endress and Johnson (2021) (hereafter EJ) recently showed that such results can be explained based on simple memory-less correlational learning mechanisms such as Hebbian Learning. Tovar and Westermann (2022) (hereafter TW) reproduced these results with a different Hebbian model. We show that the main differences between the models are whether temporal decay acts on both the connection weights and the activations (in TW) or only on the activations (in EJ), and whether interference affects weights (in TW) or activations (in EJ). Given that weights and activations are linked through the Hebbian learning rule, the networks behave similarly. However, in contrast to TW, we do not believe that neurophysiological data are relevant to adjudicate between abstract psychological models with little biological detail. Taken together, both models show that different memory-less correlational learning mechanisms provide a parsimonious account of Statistical Learning results. They are consistent with evidence that Statistical Learning might not allow learners to learn and retain words, and Statistical Learning might support predictive processing instead.
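The memory-less correlational mechanism at issue can be illustrated with a toy Hebbian learner: unit activations decay over time, and each pairwise weight grows with the product of the current activations (delta-w = eta * a_i * a_j), so syllables occurring close together in a stream become more strongly connected than distant ones. This is only a schematic sketch under assumed parameters, not a reimplementation of either EJ's or TW's actual model:

```python
from collections import defaultdict

def hebbian_cooccurrence(stream, eta=0.1, decay=0.5):
    """Toy memory-less Hebbian learner: the incoming syllable's unit is set
    to full activation, all activations decay each step, and every pair of
    units has its weight increased by eta * a_i * a_j (Hebbian co-activation).
    Returns symmetric weights keyed by unordered syllable pairs."""
    act = defaultdict(float)
    weights = defaultdict(float)
    for syll in stream:
        for unit in act:
            act[unit] *= decay       # activations decay at every step
        act[syll] = 1.0              # current syllable fully active
        units = list(act)
        for i, a in enumerate(units):
            for b in units[i + 1:]:  # Hebbian update on co-active pairs
                weights[frozenset((a, b))] += eta * act[a] * act[b]
    return weights

w = hebbian_cooccurrence(["ba", "bi", "gu", "go", "ba", "bi", "gu", "go"])
```

In this toy stream, adjacent syllables such as ba-bi end up with a larger weight than syllables two positions apart such as ba-gu, which is the sense in which decaying co-activation approximates co-occurrence statistics without storing any sequences.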
Affiliation(s)
- Scott P Johnson
- Department of Psychology, University of California, Los Angeles, United States of America
10
Conwell E, Horvath G, Kuznia A, Agauas SJ. Developmental consistency in the use of subphonemic information during real-time sentence processing. Language, Cognition and Neuroscience 2022;38:860-871. [PMID: 37521203] [PMCID: PMC10373946] [DOI: 10.1080/23273798.2022.2159993]
Abstract
Apparently homophonous sequences contain acoustic information that differentiates their meanings (Gahl, 2008; Quené, 1992). Adults use this information to segment embedded homophones (e.g., ham vs. hamster; Salverda et al., 2003) in fluent speech. Whether children also do this is unknown, as is whether listeners of any age use such information to disambiguate lexical homophones. In two experiments, 48 English-speaking adults and 48 English-speaking 7- to 10-year-old children viewed sets of four images and heard sentences containing phonemically identical sequences while their eye movements were continuously tracked. As in previous research, adults showed greater fixation on target meanings when the acoustic properties of an embedded homophone were consistent with the target than when they were consistent with the alternate interpretation. They did not show this difference for lexical homophones. Children's behavior was similar to that of adults, indicating that the use of subphonemic information in homophone processing is consistent over development.
Affiliation(s)
- Erin Conwell
- Department of Psychology, North Dakota State University, Fargo, ND
- Allyson Kuznia
- Department of Psychology, University of Oregon, Eugene, OR
11
Chung W, Yang H. The relationship between oral language and storytelling prosody in preschool children. Infant and Child Development 2022. [DOI: 10.1002/icd.2329]
Affiliation(s)
- Wei‐Lun Chung
- Department of Special Education, National Taipei University of Education, Taipei, Taiwan
- Hui‐Chun Yang
- Department of Special Education, National Kaohsiung Normal University, Kaohsiung City, Taiwan
- Graduate Institute of Audiology and Speech Therapy, National Kaohsiung Normal University, Kaohsiung City, Taiwan
12
Mechanisms of associative word learning: Benefits from the visual modality and synchrony of labeled objects. Cortex 2022;152:36-52. [DOI: 10.1016/j.cortex.2022.03.020]
13
Marimon M, Höhle B, Langus A. Pupillary entrainment reveals individual differences in cue weighting in 9-month-old German-learning infants. Cognition 2022;224:105054. [PMID: 35217262] [DOI: 10.1016/j.cognition.2022.105054]
Abstract
Young infants can segment continuous speech using statistical as well as prosodic cues. Understanding how these cues interact can be informative about how infants solve the segmentation problem. Here we investigate how German-speaking adults and 9-month-old German-learning infants weigh statistical and prosodic cues when segmenting continuous speech. We measured participants' pupil size while they were familiarized with a continuous speech stream in which prosodic cues were pitted against transitional probabilities. Adult participants' changes in pupil size synchronized with the occurrence of prosodic words during familiarization, and the temporal alignment of these pupillary changes was predictive of adult participants' performance at test. Further, 9-month-olds as a group failed to segment the familiarization stream consistently with either prosodic or statistical cues. However, the variability in the temporal alignment of the pupillary changes at the word frequency showed that prosodic and statistical cues compete for dominance when segmenting continuous speech. A follow-up language development questionnaire at 40 months of age suggested that infants who had entrained to prosodic words performed better on a vocabulary task, while those who had relied more on statistical cues performed better on grammatical tasks. Together these results suggest that statistics and prosody may serve different roles in speech segmentation in infancy.
Affiliation(s)
- Mireia Marimon
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
- Barbara Höhle
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
- Alan Langus
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
14
Sanchez-Alonso S, Aslin RN. Towards a model of language neurobiology in early development. Brain and Language 2022;224:105047. [PMID: 34894429] [DOI: 10.1016/j.bandl.2021.105047]
Abstract
Understanding language neurobiology in early childhood is essential for characterizing the developmental structural and functional changes that lead to the mature adult language network. In the last two decades, the field of language neurodevelopment has received increasing attention, particularly given the rapid advances in the implementation of neuroimaging techniques and analytic approaches that allow detailed investigations into the developing brain across a variety of cognitive domains. These methodological and analytical advances hold the promise of developing early markers of language outcomes that allow diagnosis and clinical interventions at the earliest stages of development. Here, we argue that findings in language neurobiology need to be integrated within an approach that captures the dynamic nature and inherent variability that characterizes the developing brain and the interplay between behavior and (structural and functional) neural patterns. Accordingly, we describe a framework for understanding language neurobiology in early development, which minimally requires an explicit characterization of the following core domains: i) computations underlying language learning mechanisms, ii) developmental patterns of change across neural and behavioral measures, iii) environmental variables that reinforce language learning (e.g., the social context), and iv) brain maturational constraints for optimal neural plasticity, which determine the infant's sensitivity to learning from the environment. We discuss each of these domains in the context of recent behavioral and neuroimaging findings and consider the need for quantitatively modeling two main sources of variation: individual differences or trait-like patterns of variation and within-subject differences or state-like patterns of variation. The goal is to enable models that allow prediction of language outcomes from neural measures that take into account these two types of variation. Finally, we examine how future methodological approaches would benefit from the inclusion of more ecologically valid paradigms that complement and allow generalization of traditional controlled laboratory methods.
Affiliation(s)
- Richard N Aslin
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA; Child Study Center, Yale University, New Haven, CT, USA
15
Does morphological complexity affect word segmentation? Evidence from computational modeling. Cognition 2021;220:104960. [PMID: 34920298] [DOI: 10.1016/j.cognition.2021.104960]
Abstract
How can infants detect where words or morphemes start and end in the continuous stream of speech? Previous computational studies have investigated this question mainly for English, where morpheme and word boundaries are often isomorphic. Yet in many languages, words are often multimorphemic, such that word and morpheme boundaries do not align. Our study employed corpora of two languages that differ in the complexity of inflectional morphology, Chintang (Sino-Tibetan) and Japanese (in Experiment 1), as well as corpora of artificial languages ranging in morphological complexity, as measured by the ratio and distribution of morphemes per word (in Experiments 2 and 3). We used two baselines and three conceptually diverse word segmentation algorithms, two of which rely purely on sublexical information using distributional cues, and one of which builds a lexicon. The algorithms' performance was evaluated on both word- and morpheme-level representations of the corpora. Segmentation results were better for the morphologically simpler languages than for the morphologically more complex languages, in line with the hypothesis that languages with greater inflectional complexity could be more difficult to segment into words. We further show that the effect of morphological complexity is relatively small, compared to that of algorithm and evaluation level. We therefore recommend that infant researchers look for signatures of the different segmentation algorithms and strategies, before looking for differences in infant segmentation landmarks across languages varying in complexity.
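The simplest of the sublexical, distributional strategies this literature evaluates can be sketched in a few lines: place a word boundary wherever the forward transitional probability into the next syllable dips to a local minimum. This toy Python version, with invented syllables, is only an illustration of the TP-dip heuristic, not any of the study's actual algorithms:

```python
from collections import Counter

def segment_by_tp_dips(syllables):
    """Segment a syllable stream by inserting a boundary wherever the
    forward transitional probability into the next syllable is a local
    minimum (the classic 'TP dip' heuristic)."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(syllables[:-1])
    tps = [pair_counts[p] / first_counts[p[0]] for p in pairs]
    words, current = [], [syllables[0]]
    for i, syll in enumerate(syllables[1:]):
        left = tps[i - 1] if i > 0 else float("inf")
        right = tps[i + 1] if i + 1 < len(tps) else float("inf")
        if tps[i] < left and tps[i] < right:  # TP dip: boundary before syll
            words.append("".join(current))
            current = []
        current.append(syll)
    words.append("".join(current))
    return words

stream = ["ba", "bi", "gu", "go", "gu", "go", "ba", "bi", "ba", "bi", "gu", "go"]
words = segment_by_tp_dips(stream)
```

On this toy stream built from the two "words" ba-bi and gu-go, the heuristic recovers the intended units; the study's point is that such purely sublexical strategies behave differently once words contain multiple morphemes.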
16.
Yu L, Zeng J, Wang S, Zhang Y. Phonetic Encoding Contributes to the Processing of Linguistic Prosody at the Word Level: Cross-Linguistic Evidence From Event-Related Potentials. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4791-4801. [PMID: 34731592] [DOI: 10.1044/2021_jslhr-21-00037]
Abstract
PURPOSE This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge. METHOD Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features. RESULTS We observed a language-specific ERP effect in the prosodic phonology conditions: native stimuli elicited a larger late negative response (LNR) amplitude than nonnative stimuli. However, no such effect was observed in the phoneme-free prosodic acoustic control conditions. CONCLUSIONS The results support the integration view that word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology beyond auditory processing of the critical acoustic cues at the suprasyllabic level.
Affiliation(s)
- Luodi Yu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Jiajing Zeng
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Suiping Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
17.
Ludusan B, Cristia A, Mazuka R, Dupoux E. How much does prosody help word segmentation? A simulation study on infant-directed speech. Cognition 2021; 219:104961. [PMID: 34856424] [DOI: 10.1016/j.cognition.2021.104961]
Abstract
Infants come to learn several hundreds of word forms by two years of age, and it is possible this involves carving these forms out from continuous speech. It has been proposed that the task is facilitated by the presence of prosodic boundaries. We revisit this claim by running computational models of word segmentation, with and without prosodic information, on a corpus of infant-directed speech. We use five cognitively-based algorithms, which vary in whether they employ a sub-lexical or a lexical segmentation strategy and whether they are simple heuristics or embody an ideal learner. Results show that providing expert-annotated prosodic breaks does not uniformly help all segmentation models. The sub-lexical algorithms, which perform more poorly, benefit most, while the lexical ones show a very small gain. Moreover, when prosodic information is derived automatically from the acoustic cues infants are known to be sensitive to, errors in the detection of the boundaries lead to smaller positive effects, and even negative ones for some algorithms. This shows that even though infants could potentially use prosodic breaks, it does not necessarily follow that they should incorporate prosody into their segmentation strategies, when confronted with realistic signals.
Affiliation(s)
- Bogdan Ludusan
- Laboratory for Language Development, RIKEN Center for Brain Science, Japan; Laboratoire de Sciences Cognitives et Psycholinguistique, ENS Paris Sciences Lettres, EHESS, CNRS, France.
- Alejandrina Cristia
- Laboratoire de Sciences Cognitives et Psycholinguistique, ENS Paris Sciences Lettres, EHESS, CNRS, France
- Reiko Mazuka
- Laboratory for Language Development, RIKEN Center for Brain Science, Japan; Department of Psychology and Neuroscience, Duke University, USA
- Emmanuel Dupoux
- Laboratoire de Sciences Cognitives et Psycholinguistique, ENS Paris Sciences Lettres, EHESS, CNRS, INRIA, France
18.
Ramos-Escobar N, Segura E, Olivé G, Rodriguez-Fornells A, François C. Oscillatory activity and EEG phase synchrony of concurrent word segmentation and meaning-mapping in 9-year-old children. Dev Cogn Neurosci 2021; 51:101010. [PMID: 34461393] [PMCID: PMC8403737] [DOI: 10.1016/j.dcn.2021.101010]
Abstract
When learning a new language, one must segment words from continuous speech and associate them with meanings. These complex processes can be boosted by attentional mechanisms triggered by multi-sensory information. Previous electrophysiological studies suggest that brain oscillations are sensitive to different hierarchical complexity levels of the input, making them a plausible neural substrate for speech parsing. Here, we investigated the functional role of brain oscillations during concurrent speech segmentation and meaning acquisition in sixty 9-year-old children. We collected EEG data during an audio-visual statistical learning task in which children were exposed to a learning condition with consistent word-picture associations and a random condition with inconsistent word-picture associations, before being tested on their ability to recall words and word-picture associations. We capitalized on the capacity of brain dynamics to align neural activity with the rate of an external rhythmic stimulus, exploring modulations of neural synchronization and of phase synchronization between electrodes during multi-sensory word learning. Results showed enhanced power at both the word and syllable rates, and increased EEG phase synchronization between frontal and occipital regions, in the learning condition compared to the random condition. These findings suggest that multi-sensory cueing and attentional mechanisms play an essential role in children's successful word learning.
Affiliation(s)
- Neus Ramos-Escobar
- Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, 08097, Spain
- Emma Segura
- Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, 08097, Spain
- Guillem Olivé
- Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, 08097, Spain
- Antoni Rodriguez-Fornells
- Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, 08097, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain.
19.
Ortiz Barajas MC, Gervain J. The Role of Prenatal Experience and Basic Auditory Mechanisms in the Development of Language. MINNESOTA SYMPOSIA ON CHILD PSYCHOLOGY 2021. [DOI: 10.1002/9781119684527.ch4]
20.
Prosody facilitates learning the word order in a new language. Cognition 2021; 213:104686. [PMID: 33863550] [DOI: 10.1016/j.cognition.2021.104686]
Abstract
One of the prominent ideas developed by Jacques Mehler and his colleagues was that perceptual tuning, present from birth on, enables infants, and language learners in general, to extract regularities from speech input. Here we discuss language learners' ability to extract basic word order (VO or OV) structure from prosodic regularities in a language. The two are closely related: in phonological phrases of VO languages, the most prominent word is the rightmost one, and in OV languages, it is the leftmost one. In speech, this prominence is realized as extended duration, or as elevated pitch, sometimes combined with changes in intensity. When learning the first (L1) or the second language (L2), exposure to relevant rhythmic structure elicits implicit learning about syntactic structure, including the basic word order. However, it remains unclear whether triggering the learning process requires a certain level of familiarity with the relevant rhythm. It is moreover unknown whether prosodic information can help L2 learners to extract and learn the vocabulary of a new language. We tested Spanish- and Italian-speaking adults' ability to learn words from an artificial language with either non-native OV or native VO word order. The results show that learners used prosodic information to identify the most prominent words in short utterances when the artificial language was similar to the native language, with duration-based prominence in prosody and a VO word order. In contrast, when the artificial language had non-native prominence marked by pitch alternations and an OV word order, prominent words were learned only after a three-day exposure to the relevant rhythmic structure. Thus, for adult L2 learners, only repeated exposure to the relevant prosody elicited learning of new words from an unknown language with non-native prosodic marking, indicating that, with familiarity, prosodic cues can facilitate learning in L2.
21.
Endress AD, Johnson SP. When forgetting fosters learning: A neural network model for statistical learning. Cognition 2021; 213:104621. [PMID: 33608130] [DOI: 10.1016/j.cognition.2021.104621]
Abstract
Learning often requires splitting continuous signals into recurring units, such as the discrete words constituting fluent speech; these units then need to be encoded in memory. A prominent candidate mechanism involves statistical learning of co-occurrence statistics like transitional probabilities (TPs), reflecting the idea that items from the same unit (e.g., syllables within a word) predict each other better than items from different units. TP computations are surprisingly flexible and sophisticated. Humans are sensitive to forward and backward TPs, compute TPs between adjacent items and longer-distance items, and even recognize TPs in novel units. We explain these hallmarks of statistical learning with a simple model with tunable, Hebbian excitatory connections and inhibitory interactions controlling the overall activation. With weak forgetting, activations are long-lasting, yielding associations among all items; with strong forgetting, no associations ensue as activations do not outlast stimuli; with intermediate forgetting, the network reproduces the hallmarks above. Forgetting thus is a key determinant of these sophisticated learning abilities. Further, in line with earlier dissociations between statistical learning and memory encoding, our model reproduces the hallmarks of statistical learning in the absence of a memory store in which items could be placed.
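The forward/backward distinction mentioned above can be sketched minimally (the syllable stream is invented for illustration; this is not the authors' network model): forward TP conditions on the preceding item, backward TP on the following one, and the two can differ for the same bigram.

```python
from collections import Counter

def forward_tp(stream):
    """Forward TP: P(b | a), how well an item predicts its successor."""
    bigrams = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in bigrams.items()}

def backward_tp(stream):
    """Backward TP: P(a | b), how well an item predicts its predecessor."""
    bigrams = Counter(zip(stream, stream[1:]))
    seconds = Counter(stream[1:])
    return {(a, b): n / seconds[b] for (a, b), n in bigrams.items()}

# 'go' follows both 'pa' and 'ti', but 'pa' is only ever followed by 'go'.
stream = ["pa", "go", "ti", "go", "pa", "go"]
print(forward_tp(stream)[("pa", "go")])   # → 1.0
print(backward_tp(stream)[("pa", "go")])  # → ≈0.67 (2 of 3 'go' tokens follow 'pa')
```

Here 'pa' perfectly predicts 'go' (forward TP 1.0), while 'go' only partially predicts a preceding 'pa' (backward TP 2/3); sensitivity to both directions is one of the hallmarks the model above reproduces.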
22.
Kolberg L, de Carvalho A, Babineau M, Havron N, Fiévet AC, Abaurre B, Christophe A. "The tiger is hitting! the duck too!" 3-year-olds can use prosodic information to constrain their interpretation of ellipsis. Cognition 2021; 213:104626. [PMID: 33593594] [DOI: 10.1016/j.cognition.2021.104626]
Abstract
This work aims to investigate French children's ability to use phrasal boundaries for disambiguation of a type of ambiguity not yet studied, namely stripping sentences versus simple transitive sentences. We used stripping sentences such as "[Le tigre tape]! [Le canard aussi]!" ("[The tiger is hitting]! [The duck too]!", in which both the tiger and the duck are hitting), which, without the prosodic information, would be ambiguous with a transitive sentence such as "[Le tigre] [tape le canard aussi]!" ("[The tiger] [is hitting the duck too]!", in which the tiger is hitting the duck). We presented 3-to-4-year-olds and 28-month-olds with one of the two types of sentence above, while they watched two videos side-by-side on a screen: one depicting the transitive interpretation of the sentences, and another depicting the stripping interpretation. The stripping interpretation video showed the two characters as agents of the named action (e.g. a duck and a tiger hitting a bunny), and the transitive interpretation video showed only the first character as an agent, and the second character as a patient of the action (e.g. the tiger hitting the duck and the bunny). The results showed that 3-to-4-year-olds use prosodic information to correctly distinguish stripping sentences from transitive sentences, as they looked significantly more at the appropriate video, while 28-month-olds show only a trend in the same direction. While recent studies demonstrated that from 18 months of age, infants are able to use phrasal prosody to guide the syntactic analysis of ambiguous sentences, our results show that only 3-to-4-year-olds were able to reliably use phrasal prosody to constrain the parsing of stripping sentences. We discuss several factors that can explain this delay, such as differences in the frequency of these structures in child-directed speech, as well as in the complexity of the sentences and of the experimental task. 
Our findings add to the growing body of evidence on the role of prosody in constraining parsing in young children.
Affiliation(s)
- Letícia Kolberg
- Departamento de Linguística, Universidade Estadual de Campinas, Campinas, Brazil.
- Alex de Carvalho
- Laboratoire de Psychologie du Développement et de l'Éducation de l'Enfant (LaPsyDÉ), Department of Psychology, Université de Paris, CNRS, F-75005 Paris, France
- Mireille Babineau
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure, PSL University, Paris, France; Maternité Port-Royal, AP-HP, Université Paris Descartes, France
- Naomi Havron
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure, PSL University, Paris, France; University of Haifa, Haifa, Israel
- Anne-Caroline Fiévet
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure, PSL University, Paris, France; Maternité Port-Royal, AP-HP, Université Paris Descartes, France
- Bernadete Abaurre
- Departamento de Linguística, Universidade Estadual de Campinas, Campinas, Brazil
- Anne Christophe
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure, PSL University, Paris, France; Maternité Port-Royal, AP-HP, Université Paris Descartes, France
23.
Aberrant auditory system and its developmental implications for autism. SCIENCE CHINA-LIFE SCIENCES 2021; 64:861-878. [DOI: 10.1007/s11427-020-1863-6]
24.

25.
Cosper SH, Männel C, Mueller JL. In the absence of visual input: Electrophysiological evidence of infants' mapping of labels onto auditory objects. Dev Cogn Neurosci 2020; 45:100821. [PMID: 32658761] [PMCID: PMC7358178] [DOI: 10.1016/j.dcn.2020.100821]
Abstract
Despite the prominence of non-visual semantic features for some words (e.g., siren or thunder), little is known about when and how the meanings of words that refer to auditory objects can be acquired in early infancy. With associative learning being an important mechanism of word learning, we ask whether associations between sounds and words lead to similar learning effects as associations between visual objects and words. In an event-related potential (ERP) study, 10- to 12-month-old infants were presented with pairs of environmental sounds and pseudowords in either a consistent (where sound-word mapping can occur) or inconsistent manner. Subsequently, the infants were presented with sound-pseudoword combinations either matching or violating the consistent pairs from the training phase. In the training phase, we observed word-form familiarity effects and pairing consistency effects for ERPs time-locked to the onset of the word. The test phase revealed N400-like effects for violated pairs as compared to matching pairs. These results indicate that associative word learning is also possible for auditory objects before infants' first birthday. The specific temporal occurrence of the N400-like effect and the topographical distribution of the ERPs suggest that the object's modality has an impact on how novel words are processed.
Affiliation(s)
- Samuel H Cosper
- Institute of Cognitive Science, University of Osnabrück, Germany.
- Claudia Männel
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Germany.
- Jutta L Mueller
- Institute of Cognitive Science, University of Osnabrück, Germany; Department of Linguistics, University of Vienna, Austria.
26.

27.
Bergelson E. The Comprehension Boost in Early Word Learning: Older Infants Are Better Learners. CHILD DEVELOPMENT PERSPECTIVES 2020; 14:142-149. [PMID: 33569084] [PMCID: PMC7872330] [DOI: 10.1111/cdep.12373]
Abstract
Recent research has revealed that infants begin understanding words at around 6 months. After that, infants' comprehension vocabulary increases gradually in a linear way over 8-18 months, according to data from parental checklists. In contrast, infants' word comprehension improves robustly, qualitatively, and in a nonlinear way just after their first birthday, according to data from studies on spoken word comprehension. In this review, I integrate observational and experimental data to explain these divergent results. I argue that infants' comprehension boost is not well-explained by changes in their language input for common words, but rather by proposing that they learn to take better advantage of relatively stable input data. Next, I propose potentially complementary theoretical accounts of what makes older infants better learners. Finally, I suggest how the research community can expand our empirical base in this understudied area, and why doing so will inform our knowledge about child development.
28.
Frota S, Butler J, Uysal E, Severino C, Vigário M. European Portuguese-Learning Infants Look Longer at Iambic Stress: New Data on Language Specificity in Early Stress Perception. Front Psychol 2020; 11:1890. [PMID: 32982825] [PMCID: PMC7484472] [DOI: 10.3389/fpsyg.2020.01890]
Abstract
The ability to perceive lexical stress patterns has been shown to develop in language-specific ways. However, previous studies have examined this ability in languages that are either clearly stress-based (favoring the development of a preference for trochaic stress, like English and German) or syllable-based (favoring the development of no stress preferences, like French, Spanish, and Catalan) and/or where the frequency distributions of stress patterns provide clear data for a predominant pattern (like English and Hebrew). European Portuguese (EP) is a different type of language, which presents conflicting sets of cues related to rhythm, frequency, and stress correlates that challenge existing accounts of early stress perception. Using an anticipatory eye movement (AEM) paradigm implemented with eye-tracking, EP-learning infants at 5-6 months demonstrated sensitivity to the trochaic/iambic stress contrast, with evidence of asymmetrical perception or preference for iambic stress. These results are not predicted by the rhythmic account of developing stress perception, and suggest that the language-particular phonological patterns impacting the frequency of trochaic and iambic stress, beyond lexical words with two or more syllables, together with the prosodic correlates of stress, drive the early acquisition of lexical stress. Our findings provide the first evidence of sensitivity to stress patterns in the presence of segmental variability by 5-6 months, and highlight the importance of testing developing stress perception in languages with diverse combinations of rhythmic, phonological, and phonetic properties.
Affiliation(s)
- Sónia Frota
- Center of Linguistics, School of Arts and Humanities, University of Lisbon, Lisbon, Portugal
- Joseph Butler
- Research and Enterprise Development, University of Bristol, Bristol, United Kingdom
- Ertugrul Uysal
- Faculté des Sciences Économiques, Université de Neuchâtel, Neuchâtel, Switzerland
- Cátia Severino
- Center of Linguistics, School of Arts and Humanities, University of Lisbon, Lisbon, Portugal
- Marina Vigário
- Center of Linguistics, School of Arts and Humanities, University of Lisbon, Lisbon, Portugal
29.
Hahn LE, Benders T, Snijders TM, Fikkert P. Six-month-old infants recognize phrases in song and speech. INFANCY 2020; 25:699-718. [PMID: 32794372] [DOI: 10.1111/infa.12357]
Abstract
Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well-attested and is a cornerstone to the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six-month-old Dutch infants (n = 80) were tested in the song or speech modality in the head-turn preference procedure. First, infants were familiarized to two versions of the same word sequence: One version represented a well-formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well-formed sequence, but only in a more fine-grained analysis. The preference for well-formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.
Affiliation(s)
- Laura E Hahn
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands; International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
- Titia Benders
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Paula Fikkert
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
30.
Havron N, Babineau M, Christophe A. 18-month-olds fail to use recent experience to infer the syntactic category of novel words. Dev Sci 2020; 24:e13030. [PMID: 32783246] [DOI: 10.1111/desc.13030]
Abstract
Infants are able to use the contexts in which familiar words appear to guide their inferences about the syntactic category of novel words (e.g. 'This is a' + 'dax' -> dax = object). The current study examined whether 18-month-old infants can rapidly adapt these expectations by tracking the distribution of syntactic structures in their input. In French, la petite can be followed by both nouns (la petite balle, 'the little ball') and verbs (la petite mange, 'the little one is eating'). Infants were habituated to a novel word, as well as to familiar nouns or verbs (depending on the experimental group), all appearing after la petite. The familiar words served to create an expectation that la petite would be followed by either nouns or verbs. If infants can utilize their knowledge of a few frequent words to adjust their expectations, then they could use this information to infer the syntactic category of a novel word - and be surprised when the novel word is used in a context that is incongruent with their expectations. However, infants in both groups did not show a difference between noun and verb test trials. Thus, no evidence for adaptation-based learning was found. We propose that infants have to entertain strong expectations about syntactic contexts before they can adapt these expectations based on recent input.
Affiliation(s)
- Naomi Havron
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure - PSL University, Paris, France
- Mireille Babineau
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure - PSL University, Paris, France
- Anne Christophe
- Laboratoire de Sciences Cognitives et Psycholinguistique, DEC-ENS/EHESS/CNRS, Ecole Normale Supérieure - PSL University, Paris, France
31.
Endress AD, Slone LK, Johnson SP. Statistical learning and memory. Cognition 2020; 204:104346. [PMID: 32615468] [DOI: 10.1016/j.cognition.2020.104346]
Abstract
Learners often need to identify and remember recurring units in continuous sequences, but the underlying mechanisms are debated. A particularly prominent candidate mechanism relies on distributional statistics such as Transitional Probabilities (TPs). However, it is unclear what the outputs of statistical segmentation mechanisms are, and if learners store these outputs as discrete chunks in memory. We critically review the evidence for the possibility that statistically coherent items are stored in memory and outline difficulties in interpreting past research. We use Slone and Johnson's (2018) experiments as a case study to show that it is difficult to delineate the different mechanisms learners might use to solve a learning problem. Slone and Johnson (2018) reported that 8-month-old infants learned coherent chunks of shapes in visual sequences. Here, we describe an alternate interpretation of their findings based on a multiple-cue integration perspective. First, when multiple cues to statistical structure were available, infants' looking behavior seemed to track with the strength of the strongest one - backward TPs, suggesting that infants process multiple cues simultaneously and select the strongest one. Second, like adults, infants are exquisitely sensitive to chunks, but may require multiple cues to extract them. In Slone and Johnson's (2018) experiments, these cues were provided by immediate chunk repetitions during familiarization. Accordingly, infants showed strongest evidence of chunking following familiarization sequences in which immediate repetitions were more frequent. These interpretations provide a strong argument for infants' processing of multiple cues and the potential importance of multiple cues for chunk recognition in infancy.
Affiliation(s)
- Ansgar D Endress
- Department of Psychology, City, University of London, United Kingdom.
- Lauren K Slone
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, United States; Department of Psychology, Hope College, Holland, United States
- Scott P Johnson
- Department of Psychology, University of California, Los Angeles, United States
32.
Tsuji S, Jincho N, Mazuka R, Cristia A. Communicative cues in the absence of a human interaction partner enhance 12-month-old infants’ word learning. J Exp Child Psychol 2020; 191:104740. [DOI: 10.1016/j.jecp.2019.104740]
33
Right Structural and Functional Reorganization in Four-Year-Old Children with Perinatal Arterial Ischemic Stroke Predict Language Production. eNeuro 2019; 6:ENEURO.0447-18.2019. [PMID: 31383726] [PMCID: PMC6749144] [DOI: 10.1523/eneuro.0447-18.2019]
Abstract
Brain imaging methods have contributed to shedding light on the mechanisms of recovery after early brain insult. The assumption that the unaffected right hemisphere can take over language functions after left perinatal stroke is still under debate. Here, we report how patterns of brain structural and functional reorganization were associated with language outcomes in a group of four-year-old children with left perinatal arterial ischemic stroke (PAIS). Specifically, we gathered fine-grained developmental measures of receptive and productive aspects of language as well as standardized measures of cognitive development. We also collected structural neuroimaging data as well as functional activations during a passive listening story-telling fMRI task and a resting state session (rs-fMRI). Children with a left perinatal stroke showed larger lateralization indices of both structural and functional connectivity of the dorsal language pathway towards the right hemisphere that, in turn, were associated with better language outcomes. Importantly, the pattern of structural asymmetry was significantly more right-lateralized in children with a left perinatal brain insult than in a group of matched healthy controls. These results strongly suggest that early lesions of the left dorsal pathway and the associated perisylvian regions can induce the interhemispheric transfer of language functions to right homolog regions. This study provides combined evidence of structural and functional brain reorganization of language networks after early stroke, with strong implications for neurobiological models of language development.
34
Breen E, Pomper R, Saffran J. Phonological Learning Influences Label-Object Mapping in Toddlers. J Speech Lang Hear Res 2019; 62:1923-1932. [PMID: 31170356] [PMCID: PMC6808367] [DOI: 10.1044/2019_jslhr-l-18-0131]
Abstract
Purpose: Infants rapidly acquire the sound patterns that characterize their native language. Knowledge of native language phonological cues facilitates learning new words that are consistent with these patterns. However, little is known about how newly acquired phonological knowledge (regularities that children are in the process of learning) affects novel word learning. The current experiment was designed to determine whether exposure to a novel phonological pattern affects subsequent novel word learning.
Method: Two-year-olds (n = 41) were familiarized with a list of novel words that followed a simple phonotactic regularity. Following familiarization, toddlers were taught 4 novel label-object pairs. Two of the labels were consistent with the novel regularity, and 2 were inconsistent with it.
Results: Toddlers with smaller vocabularies learned all of the novel label-object pairings, whereas toddlers with larger vocabularies learned only the pairings that were consistent with the novel phonological regularity.
Conclusion: These findings demonstrate that newly learned phonological patterns influence novel word learning and highlight the role of individual differences in toddlers' representations of candidate word forms.
Affiliation(s)
- Ellen Breen: Department of Psychology, Waisman Center, University of Wisconsin-Madison
- Ron Pomper: Department of Psychology, Waisman Center, University of Wisconsin-Madison
- Jenny Saffran: Department of Psychology, Waisman Center, University of Wisconsin-Madison
35
Mugitani R, Kobayashi T, Hayashi A, Fais L. The Use of Pitch Accent in Word-Object Association by Monolingual Japanese Infants. Infancy 2019; 24:318-337. [PMID: 32677192] [DOI: 10.1111/infa.12279]
Abstract
This study investigated the lexical use of Japanese pitch accent in Japanese-learning infants. A word-object association task revealed that 18-month-old infants succeeded in learning the associations between two nonsense objects paired with two nonsense words minimally distinguished by pitch pattern (Experiment 1). In contrast, 14-month-old infants failed (Experiment 2). Eighteen-month-old infants succeeded even for sounds that contained only the prosodic information (Experiment 3). However, a subsequent experiment revealed that 14-month-old infants succeeded in an easier single word-object task using pitch contrast (Experiment 4). These findings indicate that pitch pattern information is robustly available to 18-month-old Japanese monolingual infants in a minimal pair word-learning situation, but only partially accessible in the same context for 14-month-old infants.
Affiliation(s)
- Akiko Hayashi: Center for the Research and Support of Educational Practice, Tokyo Gakugei University
- Laurel Fais: Department of Psychology, University of British Columbia
36
Fló A, Brusini P, Macagno F, Nespor M, Mehler J, Ferry AL. Newborns are sensitive to multiple cues for word segmentation in continuous speech. Dev Sci 2019; 22:e12802. [PMID: 30681763] [DOI: 10.1111/desc.12802]
Abstract
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond. Series B, Biol Sci 364(1536), 3617-3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109(9), 3253-3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1-23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near-infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
Affiliation(s)
- Ana Fló: Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Cognitive Neuroimaging Unit, Commissariat à l'Energie Atomique (CEA), Institut National de la Santé et de la Recherche Médicale (INSERM) U992, NeuroSpin Center, Gif-sur-Yvette, France
- Perrine Brusini: Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Institute of Psychology Health and Society, University of Liverpool, Liverpool, UK
- Francesco Macagno: Neonatology Unit, Azienda Ospedaliera Santa Maria della Misericordia, Udine, Italy
- Marina Nespor: Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy
- Jacques Mehler: Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy
- Alissa L Ferry: Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Division of Human Communication, Hearing, and Development, University of Manchester, Manchester, UK
37
Auditory sequence perception in common marmosets (Callithrix jacchus). Behav Processes 2019; 162:55-63. [PMID: 30716383] [DOI: 10.1016/j.beproc.2019.01.014]
Abstract
One of the essential linguistic and musical faculties of humans is the ability to recognize the structure of sound configurations and to extract words and melodies from continuous sound sequences. However, monkeys' ability to process the temporal structure of sounds is controversial. Here, to investigate whether monkeys can analyze the temporal structure of auditory patterns, two common marmosets were trained to discriminate auditory patterns in three experiments. In Experiment 1, the marmosets were able to discriminate trains of either 0.5- or 2-kHz tones repeated at either 50- or 200-ms intervals. However, the marmosets were not able to discriminate ABAB from AABB patterns consisting of A (0.5-kHz/50-ms pulse) and B (2-kHz/200-ms pulse) elements in Experiment 2, or of A (0.5-kHz/50-ms pulse) and B (0.5-kHz/200-ms pulse) [or A (0.5-kHz/200-ms pulse) and B (2-kHz/200-ms pulse)] elements in Experiment 3. Consequently, the results indicated that the marmosets could not perceive tonal structures in terms of the temporal configuration of discrete sounds, whereas they could recognize the acoustic features of the stimuli. These findings are consistent with cognitive and brain studies indicating a limited ability to process sound sequences. However, more studies are needed to confirm the extent of auditory sequence perception in common marmosets.
38
de Carvalho A, He AX, Lidz J, Christophe A. Prosody and Function Words Cue the Acquisition of Word Meanings in 18-Month-Old Infants. Psychol Sci 2019; 30:319-332. [DOI: 10.1177/0956797618814131]
Abstract
Language acquisition presents a formidable task for infants, for whom word learning is a crucial yet challenging step. Syntax (the rules for combining words into sentences) has been robustly shown to be a cue to word meaning. But how can infants access syntactic information when they are still acquiring the meanings of words? We investigated the contribution of two cues that may help infants break into the syntax and give a boost to their lexical acquisition: phrasal prosody (speech melody) and function words, both of which are accessible early in life and correlate with syntactic structure in the world’s languages. We show that 18-month-old infants use prosody and function words to recover sentences’ syntactic structure, which in turn constrains the possible meanings of novel words: Participants (N = 48 in each of two experiments) interpreted a novel word as referring to either an object or an action, given its position within the prosodic-syntactic structure of sentences.
Affiliation(s)
- Alex de Carvalho: Département d’Études Cognitives, Laboratoire de Sciences Cognitives et Psycholinguistique, École des Hautes Études en Sciences Sociales, École Normale Supérieure, PSL Université Paris, Centre National de la Recherche Scientifique; Maternité Port-Royal, Assistance Publique – Hôpitaux de Paris, Université Paris Descartes; Department of Psychology, University of Pennsylvania
- Angela Xiaoxue He: Department of Speech, Language & Hearing Sciences, Boston University
- Jeffrey Lidz: Department of Linguistics, University of Maryland
- Anne Christophe: Département d’Études Cognitives, Laboratoire de Sciences Cognitives et Psycholinguistique, École des Hautes Études en Sciences Sociales, École Normale Supérieure, PSL Université Paris, Centre National de la Recherche Scientifique; Maternité Port-Royal, Assistance Publique – Hôpitaux de Paris, Université Paris Descartes
39
Lucca K, Wilbourn MP. The what and the how: Information-seeking pointing gestures facilitate learning labels and functions. J Exp Child Psychol 2018; 178:417-436. [PMID: 30318380] [DOI: 10.1016/j.jecp.2018.08.003]
Abstract
Infants' pointing gestures are clear and salient markers of their interest. As a result, they afford infants a targeted and precise way of eliciting information from others. The current study investigated whether, similar to older children's question asking, infants' pointing gestures are produced to obtain information. Specifically, in a single experimental study, we examined whether 18-month-olds (N = 36) point to request specific types of information and how this translates into learning across domains. We elicited pointing from infants in a context that would naturally lend itself to information seeking (i.e., out-of-reach novel objects). In response to infants' points, an experimenter provided a label, a function, or no information for each pointed-to object. We assessed infants' persistence after receiving different types of information and their subsequent ability to form label-object or function-object associations. When infants pointed and received no information or functions, they persisted significantly more often than when they pointed and received labels, suggesting that they were most satisfied with receiving labels for objects compared with functions or no information. Infants successfully mapped both labels and functions onto objects. When infants expressed their interest in a novel object in a manner other than pointing, such as reaching, they (a) were equally satisfied with receiving object labels, functions, or no information and (b) did not successfully learn either labels or functions. Together, these findings demonstrate that infants' pointing gestures are specific requests for labels that facilitate the acquisition of various types of information. In doing so, this work connects the research on information seeking during infancy to the established literature on question asking during childhood.
Affiliation(s)
- Kelsey Lucca: Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
40
Chong AJ, Vicenik C, Sundara M. Intonation Plays a Role in Language Discrimination by Infants. Infancy 2018. [DOI: 10.1111/infa.12257]
Affiliation(s)
- Adam J. Chong: Department of Linguistics, Queen Mary University of London; Department of Linguistics, University of California, Los Angeles
- Chad Vicenik: Department of Linguistics, University of California, Los Angeles
- Megha Sundara: Department of Linguistics, University of California, Los Angeles
41
Weatherhead D, White KS. And then I saw her race: Race-based expectations affect infants’ word processing. Cognition 2018; 177:87-97. [DOI: 10.1016/j.cognition.2018.04.004]
42
Mueller JL, Ten Cate C, Toro JM. A Comparative Perspective on the Role of Acoustic Cues in Detecting Language Structure. Top Cogn Sci 2018; 12:859-874. [PMID: 30033636] [DOI: 10.1111/tops.12373]
Abstract
Most human language learners acquire language primarily via the auditory modality. This is one reason why auditory artificial grammars play a prominent role in the investigation of the development and evolutionary roots of human syntax. The present position paper brings together findings from human and non-human research on the impact of auditory cues on learning about linguistic structures, with a special focus on how different types of cues and biases in auditory cognition may contribute to success and failure in artificial grammar learning (AGL). The basis of our argument is the link between auditory cues and syntactic structure across languages and development. Cross-species comparison suggests that many aspects of auditory cognition that are relevant for language are not human specific and are present even in rather distantly related species. Furthermore, auditory cues and biases have an impact on learning, which we discuss using the example of auditory perception and AGL studies. This observation, together with the significant role of auditory cues in language processing, supports the idea that auditory cues served as a bootstrap to syntax during language evolution. Yet this also means that potentially human-specific syntactic abilities are not due to basic auditory differences between humans and non-human animals but are based upon more advanced cognitive processes.
Affiliation(s)
- Carel Ten Cate: Institute of Biology, Leiden University; Leiden Institute for Brain and Cognition
- Juan M Toro: ICREA (Institució Catalana de Recerca i Estudis Avançats); Center for Brain and Cognition, University Pompeu Fabra
43
Sundara M. Why do children pay more attention to grammatical morphemes at the ends of sentences? J Child Lang 2018; 45:703-716. [PMID: 29067896] [DOI: 10.1017/s0305000917000356]
Abstract
Children pay more attention to the beginnings and ends of sentences rather than the middle. In natural speech, ends of sentences are prosodically and segmentally enhanced; they are also privileged by sensory and recall advantages. We contrasted whether acoustic enhancement or sensory and recall-related advantages are necessary and sufficient for the salience of grammatical morphemes at the ends of sentences. We measured 22-month-olds' listening times to grammatical and ungrammatical sentences with third person singular -s. Crucially, by cross-splicing the speech stimuli, acoustic enhancement and sensory and recall advantages were fully crossed. Only children presented with the verb in sentence-final position, a position with sensory and recall advantages, distinguished between the grammatical and ungrammatical sentences. Thus, sensory and recall advantages alone were necessary and sufficient to make grammatical morphemes at ends of sentences salient. These general processing constraints privilege ends of sentences over middles, regardless of the acoustic enhancement.
Affiliation(s)
- Megha Sundara: Department of Linguistics, University of California, Los Angeles
44
Liu L, Kager R. Monolingual and Bilingual Infants' Ability to Use Non-native Tone for Word Learning Deteriorates by the Second Year After Birth. Front Psychol 2018; 9:117. [PMID: 29599730] [PMCID: PMC5862817] [DOI: 10.3389/fpsyg.2018.00117]
Abstract
Previous studies reported a non-native word learning advantage for bilingual infants at around 18 months. We investigated developmental changes in infants' interpretation of sounds that aid in object mapping. Dutch monolingual and bilingual (exposed to Dutch and a second non-tone language) infants' word learning ability was examined on two novel label-object pairings using syllables differing in Mandarin tones as labels (flat vs. falling). Infants aged 14-15 months, regardless of language background, were sensitive to violations of the label-object pairings when lexical tones were switched compared to when they were the same as habituated. Conversely, at 17-18 months, neither monolingual nor bilingual infants demonstrated learning. Linking with existing literature, infants' ability to associate non-native tones with meanings may be related to tonal acoustic properties and/or perceptual assimilation to native prosodic categories. These findings provide new insights into the relation between infant tone perception, learning, and interpretative narrowing from a developmental perspective.
Affiliation(s)
- Liquan Liu: School of Social Sciences and Psychology, Western Sydney University, Sydney, NSW, Australia; Utrecht Institute of Linguistics-OTS, Utrecht University, Utrecht, Netherlands; MARCS Institute for Brain, Behaviour & Development, Western Sydney University, Sydney, NSW, Australia; Centre of Excellence for the Dynamics of Language, Australian Research Council, Canberra, ACT, Australia
- René Kager: Utrecht Institute of Linguistics-OTS, Utrecht University, Utrecht, Netherlands
45
Holzgrefe-Lang J, Wellmann C, Höhle B, Wartenburger I. Infants' Processing of Prosodic Cues: Electrophysiological Evidence for Boundary Perception beyond Pause Detection. Lang Speech 2018; 61:153-169. [PMID: 28937300] [DOI: 10.1177/0023830917730590]
Abstract
Infants as young as six months are sensitive to prosodic phrase boundaries marked by three acoustic cues: pitch change, final lengthening, and pause. Behavioral studies suggest that a language-specific weighting of these cues develops during the first year of life; recent work on German revealed that eight-month-olds, unlike six-month-olds, are capable of perceiving a prosodic boundary on the basis of pitch change and final lengthening only. The present study uses Event-Related Potentials (ERPs) to investigate the neuro-cognitive development of prosodic cue perception in German-learning infants. In adults' ERPs, prosodic boundary perception is clearly reflected by the so-called Closure Positive Shift (CPS). To date, there is mixed evidence on whether an infant CPS exists that signals early prosodic cue perception, or whether the CPS emerges only later, the latter implying that infantile brain responses to prosodic boundaries reflect acoustic, low-level pause detection. We presented six- and eight-month-olds with stimuli containing either no boundary cues, only a pitch cue, or a combination of both pitch change and final lengthening. For both age groups, responses to the former two conditions did not differ, while brain responses to prosodic boundaries cued by pitch change and final lengthening showed a positivity that we interpret as a CPS-like infant ERP component. This hints at an early sensitivity to prosodic boundaries that cannot exclusively be based on pause detection. Instead, infants' brain responses indicate an early ability to exploit subtle, relational prosodic cues in speech perception, presumably even earlier than could be concluded from previous behavioral results.
46
Dupoux E. Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner. Cognition 2018; 173:43-59. [PMID: 29324240] [DOI: 10.1016/j.cognition.2017.11.008]
Abstract
Spectacular progress in the information processing sciences (machine learning, wearable sensors) promises to revolutionize the study of cognitive development. Here, we analyse the conditions under which 'reverse engineering' language development, i.e., building an effective system that mimics infants' achievements, can contribute to our scientific understanding of early language development. We argue that, on the computational side, it is important to move from toy problems to the full complexity of the learning situation, and to take as input reconstructions of the sensory signals available to infants that are as faithful as possible. On the data side, accessible but privacy-preserving repositories of home data have to be set up. On the psycholinguistic side, specific tests have to be constructed to benchmark humans and machines at different linguistic levels. We discuss the feasibility of this approach and present an overview of current results.
47
Abstract
To understand the type of neural computations that may explain how human infants acquire their native language in only a few months, the study of their neural architecture is necessary. The development of brain imaging techniques has opened the possibility of studying human infants without discomfort, and although these studies are still sparse, several characteristics of the human infant's brain are noticeable: first, parallel and hierarchical processing pathways are observed before intense exposure to speech, with an efficient temporal coding in the left hemisphere; and second, frontal regions are involved from the start in infants' cognition. These observations are certainly not sufficient to explain language acquisition, but they illustrate a new approach that relies on a better description of infants' brain activity during linguistic tasks, which is compared to results in animals and human adults to clarify the neural bases of language in humans.
48
Räsänen O, Doyle G, Frank MC. Pre-linguistic segmentation of speech into syllable-like units. Cognition 2017; 171:130-150. [PMID: 29156241] [DOI: 10.1016/j.cognition.2017.11.003]
Abstract
Syllables are often considered to be central to infant and adult speech perception. Many theories and behavioral studies on early language acquisition are also based on syllable-level representations of spoken language. There is little clarity, however, on what sort of pre-linguistic "syllable" would actually be accessible to an infant with no phonological or lexical knowledge. Anchored by the notion that syllables are organized around particularly sonorous (audible) speech sounds, the present study investigates the feasibility of speech segmentation into syllable-like chunks without any a priori linguistic knowledge. We first operationalize sonority as a measurable property of the acoustic input, and then use sonority variation across time, or speech rhythm, as the basis for segmentation. The entire process from acoustic input to chunks of syllable-like acoustic segments is implemented as a computational model inspired by the oscillatory entrainment of the brain to speech rhythm. We analyze the output of the segmentation process in three different languages, showing that the sonority fluctuation in speech is highly informative of syllable and word boundaries in all three cases without any language-specific tuning of the model. These findings support the widely held assumption that syllable-like structure is accessible to infants even when they are only beginning to learn the properties of their native language.
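The segmentation idea lends itself to a toy illustration. The sketch below is not the authors' oscillator model: it assumes a precomputed sonority curve (a list of values in [0, 1], invented here) and simply treats local peaks at or above an arbitrary threshold as syllable nuclei, cutting at the sonority trough between successive nuclei.

```python
def syllable_boundaries(sonority, peak_thresh=0.5):
    """Toy syllable-like chunking of a sonority curve.

    Nuclei are local maxima at or above peak_thresh; a boundary is placed
    at the minimum-sonority index between each pair of successive nuclei.
    The threshold is an arbitrary assumption for illustration.
    """
    nuclei = [i for i in range(1, len(sonority) - 1)
              if sonority[i] >= peak_thresh
              and sonority[i] > sonority[i - 1]
              and sonority[i] >= sonority[i + 1]]
    # One cut at the deepest trough between each pair of adjacent nuclei.
    return [min(range(a, b + 1), key=lambda i: sonority[i])
            for a, b in zip(nuclei, nuclei[1:])]
```

A curve with three peaks, e.g. [0.0, 0.6, 0.1, 0.7, 0.05, 0.65, 0.0], yields boundaries at the two troughs (indices 2 and 4), i.e. three syllable-like chunks, without any language-specific knowledge.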
Affiliation(s)
- Okko Räsänen: Department of Signal Processing and Acoustics, Aalto University, P.O. Box 12000, Aalto, Finland
- Gabriel Doyle: Department of Psychology, Stanford University, Stanford, CA 94305, United States
- Michael C Frank: Department of Psychology, Stanford University, Stanford, CA 94305, United States
49
Berdasco-Muñoz E, Nishibayashi LL, Baud O, Biran V, Nazzi T. Early Segmentation Abilities in Preterm Infants. Infancy 2017. [DOI: 10.1111/infa.12217]
Affiliation(s)
- Elena Berdasco-Muñoz: Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes and the Centre National de la Recherche Scientifique
- Léo-Lyuki Nishibayashi: Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes and the Centre National de la Recherche Scientifique
- Thierry Nazzi: Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes and the Centre National de la Recherche Scientifique
50
Teickner C, Becker ABC, Schild U, Friedrich CK. Functional Parallelism of Detailed and Rough Speech Processing at the End of Infancy. Infancy 2017. [DOI: 10.1111/infa.12218]
Affiliation(s)
- Claudia Teickner: Department of Psychology, University of Tübingen; Department of Psychology, University of Hamburg
- Claudia K. Friedrich: Department of Psychology, University of Tübingen; Department of Psychology, University of Hamburg