1. de la Cruz-Pavía I, Hegde M, Cabrera L, Nazzi T. Infants' abilities to segment word forms from spectrally degraded speech in the first year of life. Dev Sci 2024:e13533. PMID: 38853379. DOI: 10.1111/desc.13533.
Abstract
Infants begin to segment word forms from fluent speech, a crucial task in lexical processing, between 4 and 7 months of age. Prior work has established that infants rely on a variety of cues available in the speech signal (i.e., prosodic, statistical, acoustic-segmental, and lexical) to accomplish this task. In two experiments with French-learning 6- and 10-month-olds, we use a psychoacoustic approach to examine if and how degradation of the two fundamental acoustic components extracted from speech by the auditory system, namely temporal (both frequency and amplitude modulation) and spectral information, impacts word form segmentation. Infants were familiarized with passages containing target words in which frequency modulation (FM) information was replaced with pure tones using a vocoder, while amplitude modulation (AM) was preserved in either 8 or 16 spectral bands. Infants were then tested on their recognition of the target versus novel control words. While the 6-month-olds were unable to segment in either condition, the 10-month-olds succeeded, although only in the 16-spectral-band condition. These findings suggest that 6-month-olds need FM temporal cues for speech segmentation while 10-month-olds do not, although the latter need the AM cues to be presented in enough spectral bands (i.e., 16). This developmental change in infants' sensitivity to spectrotemporal cues likely results from an increase in the range of available segmentation procedures, and/or a shift from a vowel to a consonant bias in lexical processing between the two ages, as vowels are more affected by our acoustic manipulations.
Research Highlights:
- Although segmenting speech into word forms is crucial for lexical acquisition, the acoustic information that infants' auditory system extracts to process continuous speech remains unknown.
- We examined infants' sensitivity to spectrotemporal cues in speech segmentation using vocoded speech, and revealed a developmental change between 6 and 10 months of age.
- We showed that FM information, that is, the fast temporal modulations of speech, is necessary for 6- but not 10-month-old infants to segment word forms.
- Moreover, reducing the number of spectral bands impacts 10-month-olds' segmentation abilities: they succeed when 16 bands are preserved but fail with 8 bands.
Affiliation(s)
- Irene de la Cruz-Pavía
- Faculty of Social and Human Sciences, Universidad de Deusto, Bilbao, Spain
- Basque Foundation for Science Ikerbasque, Bilbao, Spain
- Monica Hegde
- INCC UMR 8002, CNRS, F-75006, Université Paris Cité, Paris, France
- Thierry Nazzi
- INCC UMR 8002, CNRS, F-75006, Université Paris Cité, Paris, France
2. Berdasco-Muñoz E, Biran V, Nazzi T. Probing the Impact of Prematurity on Segmentation Abilities in the Context of Bilingualism. Brain Sci 2023; 13:568. PMID: 37190533. DOI: 10.3390/brainsci13040568.
Abstract
Infants born prematurely are at a high risk of developing linguistic deficits. In the current study, we compare how full-term and healthy preterm infants without neuro-sensorial impairments segment words from fluent speech, an ability crucial for lexical acquisition. While early word segmentation abilities have been found in monolingual infants, we test here whether this is also the case for French-dominant bilingual infants with varying non-dominant languages. These bilingual infants were tested on their ability to segment monosyllabic French words from French sentences at 6 months of (postnatal) age, an age at which both full-term and preterm monolinguals are able to segment these words. Our results establish the existence of segmentation skills in these infants, with no significant difference in performance between the two maturation groups. Correlation analyses found no effect of gestational age in the preterm group, nor of language dominance within the bilingual group. These findings indicate that monosyllabic word segmentation, which has been found to emerge by 4 months in monolingual French-learning infants, is a robust ability acquired at an early age even in the context of bilingualism and prematurity. Future studies should further probe segmentation abilities in more extreme conditions, such as in bilinguals tested in their non-dominant language, in preterm infants with medical issues, or testing the segmentation of more complex word structures.
3. Peter V, van Ommen S, Kalashnikova M, Mazuka R, Nazzi T, Burnham D. Language specificity in cortical tracking of speech rhythm at the mora, syllable, and foot levels. Sci Rep 2022; 12:13477. PMID: 35931787. PMCID: PMC9356059. DOI: 10.1038/s41598-022-17401-x.
Abstract
Recent research shows that adults' neural oscillations track the rhythm of the speech signal. However, the extent to which this tracking is driven by the acoustics of the signal, or by language-specific processing, remains unknown. Here, adult native listeners of three rhythmically different languages (English, French, Japanese) were compared on their cortical tracking of speech envelopes synthesized in their three native languages, which allowed for coding at each language's dominant rhythmic unit: the foot (2.5 Hz) for English, the syllable (5 Hz) for French, and the mora (10 Hz) for Japanese. The three language groups were also tested with a sequence in a non-native language, Polish, and a non-speech vocoded equivalent, to investigate possible differential speech/non-speech processing. The results first showed that cortical tracking was most prominent at 5 Hz (the syllable rate) for all three groups, with the French listeners showing enhanced tracking at 5 Hz compared to the English and Japanese groups. Second, across groups, there were no differences in responses for speech versus non-speech at 5 Hz (the syllable rate), but tracking was better for speech than for non-speech at 10 Hz (not the syllable rate). Together, these results provide evidence for both language-general and language-specific influences on cortical tracking.
Collapse
Affiliation(s)
- Varghese Peter
- MARCS Institute for Brain Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- School of Health and Behavioural Sciences, University of the Sunshine Coast, Sippy Downs, Australia
- Sandrien van Ommen
- Integrative Neuroscience and Cognition Center, CNRS-Université Paris Cité, Paris, France
- Neurosciences Fondamentales, University of Geneva, Geneva, Switzerland
- Marina Kalashnikova
- MARCS Institute for Brain Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Guipuzcoa, Spain
- IKERBASQUE, Basque Foundation for Science, Bilbao, Bizcaya, Spain
- Reiko Mazuka
- Laboratory for Language Development, RIKEN Center for Brain Science, Saitama, Japan
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, CNRS-Université Paris Cité, Paris, France
- Denis Burnham
- MARCS Institute for Brain Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
4. Hoareau M, Yeung HH, Nazzi T. Infants' statistical word segmentation in an artificial language is linked to both parental speech input and reported production abilities. Dev Sci 2019; 22:e12803. PMID: 30681753. DOI: 10.1111/desc.12803.
Abstract
Individual variability in infants' language processing is partly explained by environmental factors, such as the quantity of parental speech input, as well as by infant-specific factors, such as speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants' babbling repertoire predict infants' abilities to use these statistical cues. We replicated prior reports showing that 8-month-old infants use statistical cues to segment words, with a preference for part-words over words (a novelty effect). Crucially, 8-month-olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.
Collapse
Affiliation(s)
- Mélanie Hoareau
- Integrative Neuroscience and Cognition Center, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- CNRS (Integrative Neuroscience and Cognition Center, UMR 8002), Paris, France