1. Piot L, Nazzi T, Boll-Avetisyan N. Infants' sensitivity to phonotactic regularities related to perceptually low-salient fricatives: a cross-linguistic study. Front Psychol 2024; 15:1367240. PMID: 38533216; PMCID: PMC10964922; DOI: 10.3389/fpsyg.2024.1367240.
Abstract
Introduction: Infants' sensitivity to language-specific phonotactic regularities emerges between 6 and 9 months of age, and this sensitivity has been shown to affect other early processes such as word-form segmentation and word learning. However, the acquisition of phonotactic regularities involving perceptually low-salient phonemes (i.e., phoneme contrasts that are hard to discriminate at an early age) has rarely been studied, and prior results are mixed. Here, we aimed to further assess infants' acquisition of such regularities by focusing on the low-salient contrast of /s/- and /ʃ/-initial consonant clusters. Methods: Using the headturn preference procedure, we assessed whether French- and German-learning 9-month-old infants are sensitive to language-specific regularities varying in frequency within and between the two languages (i.e., /st/ and /sp/ frequent in French but infrequent in German; /ʃt/ and /ʃp/ frequent in German but infrequent in French). Results: French-learning infants preferred the frequent over the infrequent phonotactic regularities, but the results for the German-learning infants were less clear. Discussion: These results suggest cross-linguistic acquisition patterns, although an exploratory direct comparison of the French- and German-learning groups was inconclusive, possibly owing to low statistical power to detect such differences. Nevertheless, our findings suggest that infants' early phonotactic sensitivities extend to regularities involving perceptually low-salient phoneme contrasts at 9 months, and they highlight the importance of conducting cross-linguistic research on such language-specific processes.
Affiliation(s)
- Leonardo Piot
- Department of Linguistics, Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, Paris, France
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, Paris, France
2. Perez-Cortes S, Giancaspro D. (In)frequently asked questions: On types of frequency and their role(s) in heritage language variability. Front Psychol 2022; 13:1002978. PMID: 36507032; PMCID: PMC9728047; DOI: 10.3389/fpsyg.2022.1002978.
Abstract
In recent years, researchers have become increasingly interested in exploring frequency as a source of variability in heritage speakers' (HSs) knowledge of their heritage language (HL). While many of these studies acknowledge that frequency can affect the shape of HL grammars, there is still no clear consensus about (a) what "frequency" means in the context of HL acquisition and (b) how to operationalize its multiple subtypes. In this paper, we provide a critical overview of frequency effects in HL research and their relevance for understanding patterns of inter/intra-speaker variability. To do so, we outline how prior research has defined, measured, and tested frequency, and we present, as well as evaluate, novel methodological approaches and innovations recently implemented in the study of frequency effects, including a new analysis of how self-reported lexical frequency reliably predicts HSs' production of subjunctive mood in Spanish. Our aim is to highlight the immense potential of such work for addressing long-standing questions about HL grammars and to propose new lines of inquiry that will open up additional pathways for understanding HL variability.
Affiliation(s)
- Silvia Perez-Cortes
- Department of World Languages and Cultures, Rutgers University–Camden, Camden, NJ, United States
- David Giancaspro
- Department of Latin American, Latino and Iberian Studies, University of Richmond, Richmond, VA, United States
3. Ashokumar M, Guichet C, Schwartz JL, Ito T. Correlation between the effect of orofacial somatosensory inputs in speech perception and speech production performance. Auditory Perception & Cognition 2022; 6:97-107. PMID: 37260602; PMCID: PMC10229140; DOI: 10.1080/25742442.2022.2134674.
Abstract
Introduction: Orofacial somatosensory inputs modify the perception of speech sounds, and such auditory-somatosensory integration likely develops alongside the acquisition of speech production. We examined whether the somatosensory effect in speech perception varies with individual characteristics of speech production. Methods: The somatosensory effect in speech perception was assessed as the change in the category boundary between /e/ and /ø/ in a vowel identification test when the auditory input was paired with somatosensory stimulation: a rearward facial skin deformation corresponding to the articulatory movement for /e/. Speech production performance was quantified by the acoustic distances between the average first, second, and third formants of /e/ and /ø/ utterances recorded in a separate test. Results: Consistent with previous research, the category boundary between /e/ and /ø/ was significantly shifted towards /ø/ under somatosensory stimulation. The amplitude of the boundary shift was significantly correlated with the acoustic distance between the mean second (and marginally third) formants of /e/ and /ø/ productions, with no correlation for the first-formant distance. Discussion: Greater acoustic distances can be related to larger contrasts between the articulatory targets of vowels in speech production. These results suggest that the somatosensory effect in speech perception is linked to speech production performance.
Affiliation(s)
- Monica Ashokumar
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Clément Guichet
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, USA
4. Lorenzini I, Nazzi T. Early recognition of familiar word-forms as a function of production skills. Front Psychol 2022; 13:947245. PMID: 36186391; PMCID: PMC9524451; DOI: 10.3389/fpsyg.2022.947245.
Abstract
Growing evidence shows that early speech processing relies on information extracted from speech production. In particular, production skills are linked to word-form processing, as more advanced producers prefer listening to pseudowords containing consonants they do not yet produce. However, it is unclear whether production affects word-form encoding (the translation of perceived phonological information into a memory trace) and/or recognition (the automatic retrieval of a stored item). Distinguishing recognition from encoding makes it possible to explore whether sensorimotor information is stored in long-term phonological representations (and thus retrieved during recognition) or is processed when encoding a new item, but not necessarily when retrieving a stored one. In this study, we asked whether speech-related sensorimotor information is retained in long-term representations of word-forms. To this aim, we tested the effect of production on the recognition of ecologically learned, real familiar word-forms. Testing these items allowed us to assess the effect of sensorimotor information in a context in which encoding did not happen during testing itself. Two groups of French-learning monolinguals (11- and 14-month-olds) participated in the study. Using the Headturn Preference Procedure, each group heard two lists, each containing 10 familiar word-forms composed of either early-learned consonants (commonly produced by French learners at these ages) or late-learned consonants (more rarely produced at these ages). We hypothesized differences in listening preferences as a function of word list and/or production skills. At both 11 and 14 months, babbling skills modulated orientation times to the word lists containing late-learned consonants. This specific effect establishes that speech production impacts familiar word-form recognition by 11 months, suggesting that sensorimotor information is retained in long-term word-form representations and accessed during word-form processing.
5. Cychosz M. Language exposure predicts children's phonetic patterning: Evidence from language shift. Language 2022; 98:461-509. PMID: 37034148; PMCID: PMC10079255; DOI: 10.1353/lan.0.0269.
Abstract
Although understanding the role of the environment is central to language acquisition theory, it has rarely been studied for children's phonetic development, and receptive and expressive language experiences in the environment are seldom distinguished. This last distinction may be crucial for child speech production in particular, because production requires coordinating low-level speech-motor planning with high-level linguistic knowledge. In this study, the role of the environment is evaluated in a novel way: by studying phonetic development in a bilingual community undergoing rapid language shift. This sociolinguistic context provides a naturalistic gradient in the amount of children's exposure to two languages and in the ratio of expressive to receptive experiences. A large-scale child language corpus encompassing over 500 hours of naturalistic South Bolivian Quechua and Spanish speech was efficiently annotated for children's and their caregivers' bilingual language use. These estimates were correlated with children's patterns in a series of speech production tasks. The role of the environment varied by outcome: children's expressive language experience best predicted their performance on a coarticulation-morphology measure, while their receptive experience predicted performance on a lower-level measure of vowel variability. Overall, these bilingual exposure effects suggest a pathway for children's role in language change whereby language shift can result in different learning outcomes within a single speech community. Appropriate ways to model language exposure in development are discussed.
6. Lozano I, López Pérez D, Laudańska Z, Malinowska-Korczak A, Szmytke M, Radkowska A, Tomalski P. Changes in selective attention to articulating mouth across infancy: Sex differences and associations with language outcomes. Infancy 2022; 27:1132-1153. DOI: 10.1111/infa.12496.
Affiliation(s)
- Itziar Lozano
- Department of Cognitive Psychology and Neurocognitive Science, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- David López Pérez
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Zuzanna Laudańska
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Anna Malinowska-Korczak
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Magdalena Szmytke
- Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Alicja Radkowska
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Przemysław Tomalski
- Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
7. Polka L, Masapollo M, Ménard L. Setting the Stage for Speech Production: Infants Prefer Listening to Speech Sounds With Infant Vocal Resonances. Journal of Speech, Language, and Hearing Research 2022; 65:109-120. PMID: 34889651; DOI: 10.1044/2021_jslhr-21-00412.
Abstract
Purpose: Current models of speech development argue for an early link between speech production and perception in infants. Recent data show that young infants (at 4-6 months) preferentially attend to speech sounds (vowels) with infant vocal properties compared to those with adult vocal properties, suggesting the presence of special "memory banks" for one's own nascent speech-like productions. This study investigated whether the vocal resonances (formants) of the infant vocal tract are sufficient to elicit this preference and whether this perceptual bias changes with age and emerging vocal production skills. Method: We selectively manipulated the fundamental frequency (f0) of vowels synthesized with formants specifying either an infant or adult vocal tract, and then tested the effects of those manipulations on the listening preferences of infants who were slightly older than those previously tested (at 6-8 months). Results: Unlike findings with younger infants (at 4-6 months), slightly older infants in Experiment 1 displayed a robust preference for vowels with infant formants over adult formants when f0 was matched. The strength of this preference was also positively correlated with age among infants between 4 and 8 months. In Experiment 2, this preference favoring infant over adult formants was maintained when f0 values were modulated. Conclusions: Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that the preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development. Supplemental material: https://doi.org/10.23641/asha.17131805.
Affiliation(s)
- Linda Polka
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
- Center for Research on Brain, Language and Music, McGill University, Montréal, Québec, Canada
- Matthew Masapollo
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Lucie Ménard
- Center for Research on Brain, Language and Music, McGill University, Montréal, Québec, Canada
- Department of Linguistics, Université du Québec à Montréal, Canada
8. Keren-Portnoy T, Daffern H, DePaolis RA, Cox CMM, Brown KI, Oxley FAR, Kanaan M. "Did I just do that?": Six-month-olds learn the contingency between their vocalizations and a visual reward in 5 minutes. Infancy 2021; 26:1057-1075. PMID: 34569704; PMCID: PMC8650573; DOI: 10.1111/infa.12433.
Abstract
It has been shown that infants can increase or modify a motorically available behavior such as sucking, kicking, or arm waving in response to positive visual reinforcement (e.g., DeCasper & Fifer, 1980; Millar, 1990; Rochat & Striano, 1999; Rovee-Collier, 1997; Watson & Ramey, 1972). We tested infants to determine whether they would also change their vocal behavior in response to contingent feedback, which lacks the social, emotional, and auditory modeling typical of parent-child interaction. Here, we show that in a single five-minute session infants increase the rate of their vocalizations in order to control the appearance of colorful shapes on an iPad screen. This is the first experimental study to demonstrate that infants can rapidly learn to increase their vocalizations when given positive reinforcement with no social element. This work sets the foundations for future studies of the causal relationship between the number of early vocalizations and the onset of words. In addition, there are potential clinical applications for reinforcing vocal practice in infant populations who are at risk for poor language skills.
Affiliation(s)
- Helena Daffern
- AudioLab, Department of Electronic Engineering, University of York, York, UK
- Rory A DePaolis
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Christopher M M Cox
- Department of Language and Linguistic Science, University of York, York, UK
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Ken I Brown
- Department of Music, University of York, York, UK
- Florence A R Oxley
- Department of Language and Linguistic Science, University of York, York, UK
- Mona Kanaan
- Department of Health Sciences, University of York, York, UK
9. Cox CMM, Keren-Portnoy T, Roepstorff A, Fusaroli R. A Bayesian meta-analysis of infants' ability to perceive audio-visual congruence for speech. Infancy 2021; 27:67-96. PMID: 34542230; DOI: 10.1111/infa.12436.
Abstract
This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21: 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants' audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
Affiliation(s)
- Christopher Martin Mikkelsen Cox
- School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Department of Language and Linguistic Science, University of York, Heslington, UK
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, Heslington, UK
- Andreas Roepstorff
- School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Riccardo Fusaroli
- School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
10. Cychosz M, Munson B, Newman RS, Edwards JR. Auditory feedback experience in the development of phonetic production: Evidence from preschoolers with cochlear implants and their normal-hearing peers. The Journal of the Acoustical Society of America 2021; 150:2256. PMID: 34598599; PMCID: PMC8487217; DOI: 10.1121/10.0005884.
Abstract
Previous work has found that preschoolers with greater phonological awareness and larger lexicons, who speak more throughout the day, exhibit less intra-syllabic coarticulation in controlled speech production tasks. These findings suggest that both linguistic experience and speech-motor control are important predictors of spoken phonetic development. Still, it remains unclear how preschoolers' speech practice when they talk drives the development of coarticulation, because children who talk more are likely to have both increased fine motor control and increased auditory feedback experience. Here, the potential effect of auditory feedback is studied by examining a population that naturally differs in auditory experience: children with cochlear implants (CIs). The results show that (1) developmentally appropriate coarticulation improves with increased hearing age but not chronological age; (2) children with CIs pattern coarticulatorily closer to their younger, hearing-age-matched peers than to their chronological-age-matched peers; and (3) the effects of speech practice on coarticulation, measured using naturalistic, at-home recordings of the children's speech production, appear in children with CIs only after several years of hearing experience. Together, these results indicate a strong role of auditory feedback experience in coarticulation and suggest that parent-child communicative exchanges could stimulate children's own vocal output, which drives speech development.
Affiliation(s)
- Margaret Cychosz
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Benjamin Munson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis, Minnesota 55455, USA
- Rochelle S Newman
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Jan R Edwards
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
11. López Assef B, Desmeules-Trudel F, Bernard A, Zamuner TS. A Shift in the Direction of the Production Effect in Children Aged 2-6 Years. Child Dev 2021; 92:2447-2464. PMID: 34406649; DOI: 10.1111/cdev.13618.
Abstract
Research has found mixed evidence for the production effect in childhood. Some studies have found a positive effect of production on word recognition and recall, while others have found the reverse. This paper takes a developmental approach to investigating the production effect. Children aged 2-6 years (n = 150) from a predominantly white population in Ottawa, Canada, were trained on familiar words that were either seen, heard, or produced, followed by a recall task. Results showed a developmental shift: younger participants showed a reverse production effect, recalling more words that were heard during training, while older children showed the typical production effect, recalling more produced words. The effect of production on recall is thus not unidirectional and varies by age.
12. Cychosz M, Munson B, Edwards JR. Practice and experience predict coarticulation in child speech. Language Learning and Development 2021; 17:366-396. PMID: 34483779; PMCID: PMC8412131; DOI: 10.1080/15475441.2021.1890080.
Abstract
Much research in child speech development suggests that young children coarticulate more than adults. There are multiple, not mutually exclusive, explanations for this pattern. For example, children may coarticulate more because they are limited by immature motor control. Or they may coarticulate more if they initially represent phonological segments in larger, more holistic units such as syllables or feet. We tested the importance of several different explanations for coarticulation in child speech by evaluating how four-year-olds' language experience, speech practice, and speech planning predicted their coarticulation between adjacent segments in real words and paired nonwords. Children with larger vocabularies coarticulated less, especially in real words, though there were no reliable coarticulatory differences between real words and nonwords after controlling for word duration. Children who vocalized more throughout a daylong audio recording also coarticulated less. The quantity of child vocalizations was more predictive of the degree of children's coarticulation than a measure of receptive language experience, adult word count. Overall, these results suggest strong roles for children's phonological representations and speech practice, as well as their immature fine motor control, in coarticulatory development.
Affiliation(s)
- Margaret Cychosz
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park
- Benjamin Munson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Jan R. Edwards
- Department of Hearing and Speech Sciences, University of Maryland, College Park
13. Laing C, Bergelson E. From babble to words: Infants' early productions match words and objects in their environment. Cogn Psychol 2020; 122:101308. PMID: 32504852; PMCID: PMC7572567; DOI: 10.1016/j.cogpsych.2020.101308.
Abstract
Infants' early babbling allows them to engage in proto-conversations with caretakers, well before clearly articulated, meaningful words are part of their productive lexicon. Moreover, the well-rehearsed sounds from babble serve as a perceptual 'filter', drawing infants' attention towards words that match the sounds they can reliably produce. Using naturalistic home recordings of 44 10-11-month-olds (an age with high variability in early speech sound production), this study tests whether infants' early consonant productions match words and objects in their environment. We find that infants' babble matches the consonants produced in their caregivers' speech. Infants with a well-established consonant repertoire also match their babble to objects in their environment. Our findings show that infants' early consonant productions are shaped by their input: by 10 months, the sounds of babble match what infants see and hear.
Affiliation(s)
- Catherine Laing
- Centre for Language and Communication Research, Cardiff University, UK
- Department of Psychology and Neuroscience, Duke University, USA
- Elika Bergelson
- Centre for Language and Communication Research, Cardiff University, UK
- Department of Psychology and Neuroscience, Duke University, USA
14. Chen H, Lee DT, Luo Z, Lai RY, Cheung H, Nazzi T. Variation in phonological bias: Bias for vowels, rather than consonants or tones in lexical processing by Cantonese-learning toddlers. Cognition 2020; 213:104486. PMID: 33077170; DOI: 10.1016/j.cognition.2020.104486.
Abstract
Consonants and vowels have been considered to fulfill different functions in language processing, vowels being more important for prosodic and syntactic processes and consonants for lexically related processes (Nespor, Peña, & Mehler, 2003). This C-bias hypothesis in lexical processing is supported by studies with adults and infants in many languages, such as English, French, and Spanish, although a few studies, on Danish and Mandarin, suggest the existence of cross-linguistic variation. The present study explores whether a C-bias exists in a tone language with a complex tone system, Cantonese, by comparing the relative weight given to consonants, vowels, and tones during word learning. To do so, the looking behaviors of Cantonese-learning 20- and 30-month-olds (24 children per age/condition, 6 groups) were recorded with an eyetracker while they watched animated cartoons in Cantonese to learn pairs of novel words. The words differed minimally by either a consonant (e.g., /tœ6/ vs. /kœ6/), a vowel (e.g., /khim3/ vs. /khɛm3/), or a tone (e.g., T2 vs. T5). Analyses of proportional looking times revealed significant learning in 30-month-olds only, and at that age, only for the vowel contrasts. Growth curve analyses revealed better performance in the vowel condition than in the other two conditions. The present findings establish a V-bias in Cantonese-learning 30-month-olds, adding new evidence from this tone language that the C-bias in lexical processing is not language-general. Implications for theoretical discussions of the origins of this phonological bias, and of the impact of tones in early language acquisition, are discussed.
Affiliation(s)
- Hui Chen
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Descartes, 45 rue des Saints-Pères, 75006 Paris, France
- Daniel T Lee
- The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong
- Zili Luo
- The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong
- Regine Y Lai
- The Chinese University of Hong Kong, Department of Linguistics and Modern Languages, G/F, Leung Kau Kui Building, Shatin, N.T., Hong Kong
- Hintat Cheung
- The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Descartes, 45 rue des Saints-Pères, 75006 Paris, France
15. Willadsen E, Persson C, Patrick K, Lohmander A, Oller DK. Assessment of prelinguistic vocalizations in real time: a comparison with phonetic transcription and assessment of inter-coder reliability. Clinical Linguistics & Phonetics 2020; 34:593-616. PMID: 31711312; DOI: 10.1080/02699206.2019.1681516.
Abstract
This study investigated the reliability of naturalistic listening in real time (NLRT) compared to phonetic transcription. Speech pathology students with brief training in NLRT assessed prelinguistic syllable inventory size and specific syllable types in typically developing infants. A second study also examined inter-coder reliability for canonical babbling, canonical babbling ratio, and the presence of oral stops in the syllable inventory of infants with cleft palate, by means of NLRT. In study 1, ten students independently assessed prelinguistic samples of five 12-month-old typically developing infants using NLRT and phonetic transcription. Coders' estimates of syllable inventory size were more than twice as large with phonetic transcription as with NLRT. Results showed a strong correlation between NLRT and phonetic transcription (syllables with more than five occurrences) for syllable inventory size (r = .60; p < .001). The methods showed similar results for inter-coder reliability of specific syllable types. In study 2, three other students assessed prelinguistic samples of twenty-eight 12-month-old infants with cleft palate by means of NLRT. Results revealed perfect inter-coder agreement for the presence/absence of canonical babbling and strong correlations between the three coders' assessments of syllable inventory size (average r = .83; p < .001), but more inter-coder variability in agreement on specific syllable types. In conclusion, NLRT is a reliable method for assessing prelinguistic measures in infants with and without cleft palate, with inter-coder agreement levels comparable to phonetic transcription for specific syllable types.
16
Daffern H, Keren-Portnoy T, DePaolis RA, Brown KI. BabblePlay: An app for infants, controlled by infants, to improve early language outcomes. APPLIED ACOUSTICS. ACOUSTIQUE APPLIQUE. ANGEWANDTE AKUSTIK 2020; 162:107183. [PMID: 32362663 PMCID: PMC7043348 DOI: 10.1016/j.apacoust.2019.107183] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Revised: 10/22/2019] [Accepted: 12/07/2019] [Indexed: 06/11/2023]
Abstract
This project set out to develop an app for infants under one year of age that responds in real time to language-like infant utterances with attractive images on an iPad screen. Language-like vocalisations were defined as voiced utterances that were neither high-pitched squeals nor shouts. The app, BabblePlay, was intended for use in psycholinguistic research to investigate the possible causal relationship between early canonical babble and early onset of word production. It is also designed for a clinical setting: (1) to illustrate the importance of feedback as a way to encourage infant vocalisations, and (2) to provide consonant production practice for infant populations that do not vocalise enough or who vocalise in an atypical way, specifically autistic infants (once they have begun to produce consonants). This paper describes the development and testing of BabblePlay, which responds to an infant's vocalisations with colourful moving shapes on the screen that are analogous to some features of the infant's vocalisation, including loudness and duration. Validation testing showed high correlation between the app and two human judges in identifying vocalisations in 200 min of BabblePlay recordings, and a feasibility study conducted with 60 infants indicates that they can learn the contingency between their vocalisations and the appearance of shapes on the screen in one five-minute BabblePlay session. BabblePlay meets the specification of being a simple and easy-to-use app. It has been shown to be a promising tool for research on infant language development that could lead to its use in home and professional environments to demonstrate the importance of immediate reward for vocal utterances in increasing vocalisations in infants.
Affiliation(s)
- Helena Daffern
- AudioLab, Department of Electronic Engineering, University of York, United Kingdom
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, United Kingdom
- Rory A. DePaolis
- Communication Sciences and Disorders, James Madison University, United States

17
Jung J, Houston D. The Relationship Between the Onset of Canonical Syllables and Speech Perception Skills in Children With Cochlear Implants. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:393-404. [PMID: 32073331 PMCID: PMC7210441 DOI: 10.1044/2019_jslhr-19-00158] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2019] [Revised: 10/15/2019] [Accepted: 10/20/2019] [Indexed: 06/10/2023]
Abstract
Purpose The study sought to determine whether the onset of canonical vocalizations in children with cochlear implants (CIs) is related to speech perception skills and spoken vocabulary size at 24 months postactivation. Method The vocal development in 13 young CI recipients (implanted by their third birthdays; mean age at activation = 20.62 months, SD = 8.92 months) was examined at every 3-month interval during the first 2 years of CI use. All children were enrolled in auditory-oral intervention programs. Families of these children used spoken English only. To determine the onset of canonical syllables, the first 50 utterances from 20-min adult-child interactions were analyzed during each session. The onset timing was determined when at least 20% of utterances included canonical syllables. As children's outcomes, we examined their Lexical Neighborhood Test scores and vocabulary size at 24 months postactivation. Results Pearson correlation analysis showed that the onset timing of canonical syllables is significantly correlated with phonemic recognition skills and spoken vocabulary size at 24 months postactivation. Regression analyses also indicated that the onset timing of canonical syllables predicted phonemic recognition skills and spoken vocabulary size at 24 months postactivation. Conclusion Monitoring vocal advancement during the earliest periods following cochlear implantation could be valuable as an early indicator of auditory-driven language development in young children with CIs. It remains to be studied which factors improve vocal development for young CI recipients.
Affiliation(s)
- Jongmin Jung
- Department of Otolaryngology—Head & Neck Surgery, The Ohio State University, Columbus
- Derek Houston
- Department of Otolaryngology—Head & Neck Surgery, The Ohio State University, Columbus
- Nationwide Children's Hospital, Columbus, OH

18
Jørgensen LD, Willadsen E. Longitudinal study of the development of obstruent correctness from ages 3 to 5 years in 108 Danish children with unilateral cleft lip and palate: a sub-study within a multicentre randomized controlled trial. INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS 2020; 55:121-135. [PMID: 31710176 DOI: 10.1111/1460-6984.12508] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Revised: 09/23/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND Speech-sound development in preschoolers with unilateral cleft lip and palate (UCLP) as a group is delayed/disordered, and obstruents comprise the most vulnerable sound class. AIMS To evaluate the development of obstruent correctness (PCC-obs) and error types (cleft speech characteristics (CSCs) and developmental speech characteristics (DSCs)) from ages 3 to 5, and to investigate possible predictors (error types, velopharyngeal dysfunction (VPD) and gender) of PCC-obs at age 5 in two groups of children with UCLP. METHODS & PROCEDURES Subgroup analysis was conducted within a multicentre randomized controlled trial (RCT) of primary surgery (Scandcleft Project). A total of 125 Danish children with UCLP received lip and soft palate repair around 4 months of age and either early hard palate closure at 12 months (EHPC group) or late hard palate closure at 36 months (LHPC group). Audio and video recordings of a naming test were available for 108 children at ages 3 and 5, and recordings were transcribed phonetically by blinded raters. OUTCOMES & RESULTS PCC-obs scores increased significantly from ages 3 to 5 in both groups, but with small effect sizes in the EHPC group, which had higher scores at age 3 than the LHPC group. DSCs decreased in both groups, whereas CSCs decreased only in the LHPC group, which had more CSCs at age 3 than the EHPC group. The frequency of CSCs at age 3 was a significant predictor of PCC-obs scores at age 5 in both groups. DSCs significantly improved the logistic regression model in the EHPC group, whereas VPD and gender did not significantly improve the model in either group. CONCLUSIONS & IMPLICATIONS Although PCC-obs developed significantly from ages 3 to 5, children with UCLP as a group did not catch up to typically developing Danish children at age 5. Furthermore, the LHPC group at age 5 did not reach the 3-year level of the EHPC group, which means that delaying hard palate closure until age 3 is detrimental to obstruent development. Both CSCs and DSCs at age 3 were important predictors of PCC-obs at age 5 and should be considered when determining the need for intervention.
Affiliation(s)
- Line Dahl Jørgensen
- University of Copenhagen, Department of Nordic Studies and Linguistics, Copenhagen, Denmark
- Elisabeth Willadsen
- University of Copenhagen, Department of Nordic Studies and Linguistics, Copenhagen, Denmark

19
Noiray A, Popescu A, Killmer H, Rubertus E, Krüger S, Hintermeier L. Spoken Language Development and the Challenge of Skill Integration. Front Psychol 2019; 10:2777. [PMID: 31920826 PMCID: PMC6938249 DOI: 10.3389/fpsyg.2019.02777] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2019] [Accepted: 11/25/2019] [Indexed: 11/17/2022] Open
Abstract
The development of phonological awareness, the knowledge of the structural combinatoriality of a language, has been widely investigated in relation to reading (dis)ability across languages. However, the extent to which knowledge of phonemic units may interact with spoken language organization in (transparent) alphabetic languages has hardly been investigated. The present study examined whether phonemic awareness correlates with coarticulation degree, commonly used as a metric for estimating the size of children's production units. A speech production task was designed to test for developmental differences in intra-syllabic coarticulation degree in 41 German children from 4 to 7 years of age. The technique of ultrasound imaging allowed for comparing the articulatory foundations of children's coarticulatory patterns. Four behavioral tasks assessing various levels of phonological awareness from large to small units and expressive vocabulary were also administered. Generalized additive modeling revealed strong interactions of children's vocabulary and phonological awareness with coarticulatory patterns. Greater knowledge of sub-lexical units was associated with lower intra-syllabic coarticulation degree and greater differentiation of articulatory gestures for individual segments. This interaction was mostly nonlinear: an increase in children's phonological proficiency was not systematically associated with an equivalent change in coarticulation degree. Similar relationships were found between vocabulary and coarticulatory patterns. Overall, results suggest that the process of developing spoken language fluency involves dynamical interactions between cognitive and speech motor domains. Arguments for an integrated-interactive approach to skill development are discussed.
Affiliation(s)
- Aude Noiray
- Laboratory for Oral Language Acquisition, Linguistic Department, University of Potsdam, Potsdam, Germany
- Haskins Laboratories, New Haven, CT, United States
- Anisia Popescu
- Laboratory for Oral Language Acquisition, Linguistic Department, University of Potsdam, Potsdam, Germany
- Helene Killmer
- Department of Linguistics, University of Oslo, Oslo, Norway
- Elina Rubertus
- Laboratory for Oral Language Acquisition, Linguistic Department, University of Potsdam, Potsdam, Germany
- Stella Krüger
- Laboratory for Oral Language Acquisition, Linguistic Department, University of Potsdam, Potsdam, Germany
- Lisa Hintermeier
- Department of Education, Jyväskylä University, Jyväskylä, Finland

20
Sensorimotor influences on speech perception in pre-babbling infants: Replication and extension of Bruderer et al. (2015). Psychon Bull Rev 2019; 26:1388-1399. [PMID: 31037603 DOI: 10.3758/s13423-019-01601-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The relationship between speech perception and production is central to understanding language processing, yet remains under debate, particularly in early development. Recent research suggests that in infants aged 6 months, when the native phonological system is still being established, sensorimotor information from the articulators influences speech perception: The placement of a teething toy restricting tongue-tip movements interfered with infants' discrimination of a non-native contrast, /Da/-/da/, that involves tongue-tip movement. This effect was selective: A different teething toy that prevented lip closure but not tongue-tip movement did not disrupt discrimination. We conducted two sets of studies to replicate and extend these findings. Experiments 1 and 2 replicated the study by Bruderer et al. (Proceedings of the National Academy of Sciences of the United States of America, 112 (44), 13531-13536, 2015), but with synthesized auditory stimuli. Infants discriminated the non-native contrast (dental /da/ - retroflex /Da/) (Experiment 1), but showed no evidence of discrimination when the tongue-tip movement was prevented with a teething toy (Experiment 2). Experiments 3 and 4 extended this work to a native phonetic contrast (bilabial /ba/ - dental /da/). Infants discriminated the distinction with no teething toy present (Experiment 3), but when they were given a teething toy that interfered only with lip closure, a movement involved in the production of /ba/, discrimination was disrupted (Experiment 4). Importantly, this was the same teething toy that did not interfere with discrimination of /da/-/Da/ in Bruderer et al. (2015). These findings reveal specificity in the relation between sensorimotor and perceptual processes in pre-babbling infants, and show generalizability to a second phonetic contrast.
21
Bernier DE, White KS. Toddlers Process Common and Infrequent Childhood Mispronunciations Differently for Child and Adult Speakers. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:4137-4149. [PMID: 31644384 DOI: 10.1044/2019_jslhr-h-18-0465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose This study examined toddlers' processing of mispronunciations based on their frequency of occurrence in child speech and the speaker who produced them. Method One hundred twenty 22-month-olds were assigned to 1 of 4 conditions. Using the intermodal preferential looking paradigm, toddlers were shown visual displays containing 1 familiar object and 1 novel object, labeled by either a child or an adult. Familiar objects were labeled correctly or with a small mispronunciation that is either common in child speech (e.g., waisin for raisin) or infrequent (e.g., rauter for water). Results A significant interaction of speaker and type of mispronunciation showed that, for the child speaker, toddlers treated common and infrequent mispronunciations similarly, with equivalently sized mispronunciation penalties relative to correctly pronounced labels. In contrast, for the adult speaker, toddlers showed a large penalty for common mispronunciations, but infrequent mispronunciations were treated equivalently to correct pronunciations. Conclusion These results both reinforce and extend previous work on toddlers' processing of mispronunciations by revealing a complex interplay of speaker, type of mispronunciation, and specific contrast in toddlers' perceptions of mispronunciations.
Affiliation(s)
- Dana E Bernier
- Department of Psychology, University of Waterloo, Ontario, Canada

22
Trudeau-Fisette P, Ito T, Ménard L. Auditory and Somatosensory Interaction in Speech Perception in Children and Adults. Front Hum Neurosci 2019; 13:344. [PMID: 31636554 PMCID: PMC6788346 DOI: 10.3389/fnhum.2019.00344] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Accepted: 09/18/2019] [Indexed: 11/28/2022] Open
Abstract
Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether humans have an innate ability or whether repeated sensory input while the brain is maturing leads to efficient integration of sensory information in speech. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19–30) and 14 children (age 5–6). The participants were asked to identify if the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input consisting of facial skin stretches provided by a robotic device, which mimics the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that integration of auditory and somatosensory information evolves throughout the course of development.
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
- Takayuki Ito
- GIPSA-Lab, CNRS, Grenoble INP, Université Grenoble Alpes, Grenoble, France
- Haskins Laboratories, Yale University, New Haven, CT, United States
- Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada

23
Johnson EK, White KS. Developmental sociolinguistics: Children's acquisition of language variation. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2019; 11:e1515. [PMID: 31454182 DOI: 10.1002/wcs.1515] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Revised: 07/11/2019] [Accepted: 07/12/2019] [Indexed: 11/10/2022]
Abstract
Developmental sociolinguistics is a rapidly evolving interdisciplinary framework that builds upon theoretical and methodological contributions from multiple disciplines (i.e., sociolinguistics, language acquisition, the speech sciences, developmental psychology, and psycholinguistics). A core assumption of this framework is that language is by its very nature variable, and that much of this variability is informative, as it is (probabilistically) governed by a variety of factors, including linguistic context, social or cultural context, the relationship between speaker and addressee, a language user's geographic origin, and a language user's gender identity. It is becoming increasingly clear that consideration of these factors is absolutely essential to developing realistic and ecologically valid models of language development. Given the central importance of language in our social world, a more complete understanding of early social development will also require a deeper understanding of when and how language variation influences children's social inferences and behavior. As the cross-pollination between formerly disparate fields continues, we anticipate a paradigm shift in the way many language researchers conceptualize the challenge of early acquisition. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language Acquisition; Neuroscience > Development; Psychology > Language.
Affiliation(s)
- Katherine S White
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada

24
Majorano M, Bastianello T, Morelli M, Lavelli M, Vihman MM. Vocal production and novel word learning in the first year. JOURNAL OF CHILD LANGUAGE 2019; 46:606-616. [PMID: 30632478 DOI: 10.1017/s0305000918000521] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Previous studies have demonstrated an effect of early vocal production on infants' speech processing and later vocabulary. This study focuses on the relationship between vocal production and new word learning. Thirty monolingual Italian-learning infants were recorded at about 11 months, to establish the extent of their consonant production. In parallel, the infants were trained on novel word-object pairs, two consisting of early learned consonants (ELC), two consisting of late learned consonants (LLC). Word learning was assessed through Preferential Looking. The results suggest that vocal production supports word learning: Only children with higher, consistent consonant production attended more to the trained ELC images.
Affiliation(s)
- Marika Morelli
- Department of Human Sciences, University of Verona, Verona, Italy
- Manuela Lavelli
- Department of Human Sciences, University of Verona, Verona, Italy
- Marilyn M Vihman
- Department of Language and Linguistic Science, University of York, Heslington, York, UK

25
Imafuku M, Kanakogi Y, Butler D, Myowa M. Demystifying infant vocal imitation: The roles of mouth looking and speaker's gaze. Dev Sci 2019; 22:e12825. [PMID: 30980494 DOI: 10.1111/desc.12825] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2017] [Revised: 01/08/2019] [Accepted: 03/01/2019] [Indexed: 12/20/2022]
Abstract
Vocal imitation plays a fundamental role in human language acquisition from infancy. Little is known, however, about how infants imitate others' sounds. We focused on three factors: (a) whether infants receive information from upright faces, (b) the infant's observation of the speaker's mouth and (c) the speaker directing their gaze towards the infant. We recorded the eye movements of 6-month-olds who participated in experiments watching videos of a speaker producing vowel sounds. We found that infants' tendency to vocally imitate such videos increased as a function of (a) seeing upright rather than inverted faces, (b) their increased looking towards the speaker's mouth and (c) whether the speaker directed their gaze towards, rather than away from, infants. These latter findings are consistent with theories of motor resonance and natural pedagogy respectively. New light has been shed on the cues and underlying mechanisms linking infant speech perception and production.
Affiliation(s)
- Masahiro Imafuku
- Graduate School of Education, Kyoto University, Kyoto, Japan
- Faculty of Education, Musashino University, Tokyo, Japan
- David Butler
- Graduate School of Education, Kyoto University, Kyoto, Japan
- The Institute for Social Neuroscience Psychology, Heidelberg, Victoria, Australia
- Masako Myowa
- Graduate School of Education, Kyoto University, Kyoto, Japan

26
Vilain A, Dole M, Lœvenbruck H, Pascalis O, Schwartz JL. The role of production abilities in the perception of consonant category in infants. Dev Sci 2019; 22:e12830. [PMID: 30908771 DOI: 10.1111/desc.12830] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Revised: 02/25/2019] [Accepted: 03/06/2019] [Indexed: 12/01/2022]
Abstract
The influence of motor knowledge on speech perception is well established, but the functional role of the motor system is still poorly understood. The present study explores the hypothesis that speech production abilities may help infants discover phonetic categories in the speech stream, in spite of coarticulation effects. To this aim, we examined the influence of babbling abilities on consonant categorization in 6- and 9-month-old infants. Using an intersensory matching procedure, we investigated the infants' capacity to associate auditory information about a consonant in various vowel contexts with visual information about the same consonant, and to map auditory and visual information onto a common phoneme representation. Moreover, a parental questionnaire evaluated the infants' consonantal repertoire. In a first experiment using /b/-/d/ consonants, we found that infants who displayed babbling abilities and produced the /b/ and/or the /d/ consonants in repetitive sequences were able to correctly perform intersensory matching, while non-babblers were not. In a second experiment using the /v/-/z/ pair, which is as visually contrasted as the /b/-/d/ pair but which is usually not produced at the tested ages, no significant matching was observed, for any group of infants, babbling or not. These results demonstrate, for the first time, that the emergence of babbling could play a role in the extraction of vowel-independent representations for consonant place of articulation. They have important implications for speech perception theories, as they highlight the role of sensorimotor interactions in the development of phoneme representations during the first year of life.
Affiliation(s)
- Anne Vilain
- GIPSA-Lab, Speech & Cognition Department, CNRS, Université Grenoble Alpes, Grenoble INP, Grenoble, France
- Marjorie Dole
- GIPSA-Lab, Speech & Cognition Department, CNRS, Université Grenoble Alpes, Grenoble INP, Grenoble, France
- Hélène Lœvenbruck
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc, Grenoble, France
- Olivier Pascalis
- LPNC, CNRS, Université Grenoble Alpes, Université Savoie Mont Blanc, Grenoble, France
- Jean-Luc Schwartz
- GIPSA-Lab, Speech & Cognition Department, CNRS, Université Grenoble Alpes, Grenoble INP, Grenoble, France

27
Hoareau M, Yeung HH, Nazzi T. Infants' statistical word segmentation in an artificial language is linked to both parental speech input and reported production abilities. Dev Sci 2019; 22:e12803. [PMID: 30681753 DOI: 10.1111/desc.12803] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 11/26/2018] [Accepted: 01/14/2019] [Indexed: 01/11/2023]
Abstract
Individual variability in infants' language processing is partly explained by environmental factors, like the quantity of parental speech input, as well as by infant-specific factors, like speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants' babbling repertoire predict infants' abilities to use these statistical cues. We replicated prior reports showing that 8-month-old infants use statistical cues to segment words, with a preference for part-words over words (a novelty effect). Crucially, 8-month-olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.
Affiliation(s)
- Mélanie Hoareau
- Integrative Neuroscience and Cognition Center, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- CNRS (Integrative Neuroscience and Cognition Center, UMR 8002), Paris, France

28

29
Zamuner TS, Strahm S, Morin-Lessard E, Page MPA. Reverse production effect: children recognize novel words better when they are heard rather than produced. Dev Sci 2017; 21:e12636. [PMID: 29143412 DOI: 10.1111/desc.12636] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2016] [Accepted: 08/31/2017] [Indexed: 11/30/2022]
Abstract
This research investigates the effect of production on 4.5- to 6-year-old children's recognition of newly learned words. In Experiment 1, children were taught four novel words in a produced or heard training condition during a brief training phase. In Experiment 2, children were taught eight novel words, and this time the training conditions were presented in a blocked design. Immediately after training, children were tested on their recognition of the trained novel words using a preferential looking paradigm. In both experiments, children recognized novel words that were produced and heard during training, but demonstrated better recognition for items that were heard. These findings run counter to previous results reported in the literature with adults and children. Our results show that the benefits of speech production for word learning depend on factors such as task complexity and the developmental stage of the learner.
Affiliation(s)
- Tania S Zamuner
- Department of Linguistics, University of Ottawa, Ottawa, Canada
- Michael P A Page
- Department of Psychology, University of Hertfordshire, Hatfield, Hertfordshire, UK

30
Fagan MK, Doveikis KN. Ordinary Interactions Challenge Proposals That Maternal Verbal Responses Shape Infant Vocal Development. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2819-2827. [PMID: 28973108 DOI: 10.1044/2017_jslhr-s-16-0005] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2016] [Accepted: 05/27/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE This study tested proposals that maternal verbal responses shape infant vocal development, proposals based in part on evidence that infants modified their vocalizations to match mothers' experimentally manipulated vowel or consonant-vowel responses to most (i.e., 70%-80%) infant vocalizations. We tested the proposal in ordinary rather than experimentally manipulated interactions. METHOD Response-based proposals were tested in a cross-sectional study of 35 infants, ages 4 to 14 months, engaged in everyday interactions in their homes with their mothers using a standard set of toys and picture books. RESULTS Mothers responded to 30% of infant vocalizations with vocal behaviors of their own, far fewer than experimentally manipulated response rates. Moreover, mothers produced comparatively few vowel and consonant-vowel models and responded to infants' vowel and consonant-vowel vocalizations in similar numbers. Infants showed little evidence of systematically modifying their vocal forms to match maternal responses in these interactions. Instead, consonant-vowel vocalizations increased significantly with infant age. CONCLUSIONS Results obtained in ordinary interactions, rather than response manipulation, did not provide substantial support for response-based mechanisms of infant vocal development. Consistent with other research, however, consonant-vowel productions increased with infant age.
Affiliation(s)
- Mary K Fagan
- Department of Communication Sciences and Disorders, Chapman University, Irvine, CA
- Kate N Doveikis
- Department of Communication Science and Disorders, University of Missouri, Columbia
31
Affiliation(s)
- Titia Benders
- Center for Language Studies, Radboud University Nijmegen
32
McGillion M, Herbert JS, Pine J, Vihman M, dePaolis R, Keren-Portnoy T, Matthews D. What Paves the Way to Conventional Language? The Predictive Value of Babble, Pointing, and Socioeconomic Status. Child Dev 2016; 88:156-166. [DOI: 10.1111/cdev.12671] [Citation(s) in RCA: 89] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Julian Pine
- University of Liverpool
- ESRC International Centre for Language and Communicative Development (LuCiD)
33
Giannecchini T, Yucubian-Fernandes A, Maximino LP. Praxia não verbal na fonoaudiologia: revisão de literatura. REVISTA CEFAC 2016. [DOI: 10.1590/1982-021620161856816] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech is defined as the motor representation of Language, involving the coordination of three neurological processes: the organization of concepts, symbolic formulation and expression; the programming of the motor act involved in speech production; and its motor production itself. Speech Motor Control, which orders muscle contraction for speech execution, includes the planning and preparation of movements and the execution of plans that result in muscle contractions and displacements of structures culminating in speech articulation. National and international scientific studies point to a new field of speech-language pathology practice for working with disordered speech through the stimulation of Non-Verbal Praxis. The aim of this work is to review, in the literature, the treatment given to oral and non-verbal praxis and to identify their clinical applications in speech-language pathology. A search was conducted in the PubMed, Lilacs and SciELO databases. The 40 selected citations were critically evaluated. The articles showed that Non-Verbal Praxis can be stimulated in clinical work with Speech; however, there is no description of the speech-language intervention, nor a detailed account of the sequenced exercises that could be used. No article indicated how Non-Verbal Praxis should be worked on, nor how to stimulate motor programming for Speech. This study points to the clinical need to create speech-language intervention instruments that include the stimulation of Non-Verbal Praxis for work on Speech articulation.
34
Abstract
To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception-production relationship. We posit the more parsimonious view that both domains depend on commensurate articulatory information. Our proposed framework combines principles of the Perceptual Assimilation Model (PAM) and Articulatory Phonology (AP). According to PAM, infants attune to articulatory information in native speech and detect similarities of nonnative phones to native articulatory patterns. The AP premise that gestures of the speech organs are the basic elements of phonology offers articulatory similarity metrics while satisfying the requirement that phonological information be discrete and contrastive: (a) distinct articulatory organs produce vocal tract constrictions and (b) phonological contrasts recruit different articulators and/or constrictions of a given articulator that differ in degree or location. Various lines of research suggest young children perceive articulatory information, which guides their productions: discrimination of between- versus within-organ contrasts, simulations of attunement to language-specific articulatory distributions, multimodal speech perception, oral/vocal imitation, and perceptual effects of articulator activation or suppression. We conclude that articulatory gesture information serves as the foundation for developmental integrality of speech perception and production.
Affiliation(s)
- Catherine T. Best
- MARCS Institute, Western Sydney University
- School of Humanities and Communication Arts, Western Sydney University
- Haskins Laboratories
- Louis M. Goldstein
- Haskins Laboratories
- Department of Linguistics, University of Southern California
- Hosung Nam
- Haskins Laboratories
- Department of English Language and Literature, Korea University
- Michael D. Tyler
- MARCS Institute, Western Sydney University
- School of Social Sciences and Psychology, Western Sydney University
35
Vihman MM. Learning words and learning sounds: Advances in language development. Br J Psychol 2016; 108:1-27. [PMID: 27449816 DOI: 10.1111/bjop.12207] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Revised: 06/13/2016] [Indexed: 10/21/2022]
Abstract
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.
36
Lee CC, Jhang Y, Chen LM, Relyea G, Oller DK. Subtlety of Ambient-Language Effects in Babbling: A Study of English- and Chinese-Learning Infants at 8, 10, and 12 Months. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2016; 13:100-126. [PMID: 28496393 PMCID: PMC5421641 DOI: 10.1080/15475441.2016.1180983] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Prior research on ambient-language effects in babbling has often suggested infants produce language-specific phonological features within the first year. These results have been questioned in research failing to find such effects and challenging the positive findings on methodological grounds. We studied English- and Chinese-learning infants at 8, 10, and 12 months and found listeners could not detect ambient-language effects in the vast majority of infant utterances, but only in items deemed to be words or to contain canonical syllables that may have made them sound like words with language-specific shapes. Thus, the present research suggests the earliest ambient-language effects may be found in emerging lexical items or in utterances influenced by language-specific features of lexical items. Even the ambient-language effects for infant canonical syllables and words were very small compared with ambient-language effects for meaningless but phonotactically well-formed syllable sequences spoken by adult native speakers of English and Chinese.
Affiliation(s)
- Chia-Cheng Lee
- School of Communication Sciences and Disorders, The University of Memphis
- Yuna Jhang
- School of Communication Sciences and Disorders, The University of Memphis
- Li-mei Chen
- Department of Foreign Languages and Literature, National Cheng Kung University
- D. Kimbrough Oller
- School of Communication Sciences and Disorders, The University of Memphis
- The Konrad Lorenz Institute for Evolution and Cognition Research
37
DePaolis RA, Keren-Portnoy T, Vihman M. Making Sense of Infant Familiarity and Novelty Responses to Words at Lexical Onset. Front Psychol 2016; 7:715. [PMID: 27242624 PMCID: PMC4870251 DOI: 10.3389/fpsyg.2016.00715] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Accepted: 04/27/2016] [Indexed: 11/13/2022] Open
Abstract
This study suggests that familiarity and novelty preferences in infant experimental tasks can in some instances be interpreted together as a single indicator of language advance. We provide evidence to support this idea based on our use of the auditory headturn preference paradigm to record responses to words likely to be either familiar or unfamiliar to infants. Fifty-nine 10-month-old infants were tested. The task elicited mixed preferences: familiarity (longer average looks to the words likely to be familiar to the infants), novelty (longer average looks to the words likely to be unfamiliar) and no preference (similar-length looks to both types of words). The infants who exhibited either a familiarity or a novelty response were more advanced on independent indices of phonetic advance than the infants who showed no preference. In addition, infants exhibiting novelty responses were more lexically advanced than either the infants who exhibited familiarity or those who showed no preference. The results provide partial support for Hunter and Ames' (1988) developmental model of attention in infancy and suggest caution when interpreting studies indexed to chronological age.
Affiliation(s)
- Rory A DePaolis
- Communication Sciences and Disorders, James Madison University, Harrisonburg VA, USA
- Marilyn Vihman
- Language and Linguistic Science, University of York, York, UK
38
Warlaumont AS, Finnegan MK. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity. PLoS One 2016; 11:e0145096. [PMID: 26808148 PMCID: PMC4726623 DOI: 10.1371/journal.pone.0145096] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2015] [Accepted: 11/29/2015] [Indexed: 11/19/2022] Open
Abstract
At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant's nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model's frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one's own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop in infancy but also for our understanding of how they may have evolved.
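The learning rule this abstract describes, reward-modulated (dopamine-gated) spike-timing-dependent plasticity, can be illustrated in miniature. The sketch below is not the authors' spiking-network model: the function `reward_modulated_stdp`, its parameters, and the scalar reward signal are hypothetical simplifications that show only the core idea of an eligibility trace converted into a weight change when reward arrives.

```python
def reward_modulated_stdp(weights, pre_spikes, post_spikes, rewards,
                          a_plus=0.1, tau_e=0.8, lr=0.5):
    """Toy reward-modulated STDP over discrete time steps.

    A causal pre-then-post spike pairing does not change a synapse
    directly; it increments a decaying eligibility trace. The trace is
    converted into an actual weight change only when a reward (a stand-in
    for dopamine) is delivered, possibly several steps later.
    """
    traces = [0.0] * len(weights)
    for t in range(len(rewards)):
        for i in range(len(weights)):
            traces[i] *= tau_e                          # trace decays each step
            if pre_spikes[t][i] and post_spikes[t]:     # causal pre/post pairing
                traces[i] += a_plus                     # mark synapse as eligible
            weights[i] += lr * rewards[t] * traces[i]   # reward gates the update
    return weights
```

Run on a two-synapse example, only the synapse whose presynaptic spikes coincide with postsynaptic firing before the reward is strengthened; the silent synapse is untouched, capturing why reward can reinforce the specific activity (here, a salient vocalization) that produced it.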
Affiliation(s)
- Anne S. Warlaumont
- Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States of America
- Megan K. Finnegan
- Speech & Hearing Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States of America
39
Abstract
The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.
40
Streri A, Coulon M, Marie J, Yeung HH. Developmental Change in Infants' Detection of Visual Faces that Match Auditory Vowels. INFANCY 2015. [DOI: 10.1111/infa.12104] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Arlette Streri
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Marion Coulon
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Julien Marie
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- H. Henny Yeung
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- The Centre National de la Recherche Scientifique
41
Fagan MK. Why repetition? Repetitive babbling, auditory feedback, and cochlear implantation. J Exp Child Psychol 2015; 137:125-36. [PMID: 25974171 PMCID: PMC4442053 DOI: 10.1016/j.jecp.2015.04.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2015] [Revised: 04/10/2015] [Accepted: 04/10/2015] [Indexed: 11/19/2022]
Abstract
This study investigated the reduplicated, or repetitive vocalizations of hearing infants and infants with profound hearing loss with and without cochlear implants using a new measure of repetition in order to address questions not only about the effects of cochlear implantation on repetitive babbling, but also about the reason repetitive vocalizations occur at all and why they emerge around 7 or 8 months of age in hearing infants. Participants were 16 infants with profound hearing loss and 27 hearing infants who participated at a mean age of 9.9 months and/or a mean age of 17.7 months. Mean age at cochlear implantation for infants with profound hearing loss was 12.9 months, and mean duration of implant use was 4.2 months. The data show that before cochlear implantation, repetitive vocalizations were rare. However, 4 months after cochlear implant activation, infants with hearing loss produced both repetitive vocalizations and repetitions per vocalization at levels commensurate with their hearing peers. The results support the hypothesis that repetition emerges as a means of vocal exploration during the time when hearing infants (and infants with cochlear implants) form auditory-motor representations and neural connections between cortical areas active in syllable production and syllable perception, during the transition from nonlinguistic to linguistic vocalization.
Affiliation(s)
- Mary K Fagan
- Department of Communication Science and Disorders, University of Missouri, Columbia, MO 65211, USA.
42
Curtin S, Zamuner TS. Understanding the developing sound system: interactions between sounds and words. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2015; 5:589-602. [PMID: 26308747 DOI: 10.1002/wcs.1307] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2014] [Revised: 06/17/2014] [Accepted: 07/18/2014] [Indexed: 11/07/2022]
Abstract
Over the course of the first 2 years of life, infants are learning a great deal about the sound system of their native language. Acquiring the sound system requires the infant to learn about sounds and their distributions, sound combinations, and prosodic information, such as syllables, rhythm, and stress. These aspects of the phonological system are being learned simultaneously as the infant experiences the language around him or her. What binds all of the phonological units is the context in which they occur, namely, words. In this review, we explore the development of phonetics and phonology by showcasing the interactive nature of the developing lexicon and sound system with a focus on perception. We first review seminal research in the foundations of phonological development. We then discuss early word recognition and learning followed by a discussion of phonological and lexical representations. We conclude by discussing the interactive nature of lexical and phonological representations and highlight some further directions for exploring the developing sound system.
Affiliation(s)
- Suzanne Curtin
- Department of Psychology, University of Calgary, Calgary, Alberta, Canada
- Tania S Zamuner
- Department of Linguistics, University of Ottawa, Ottawa, Ontario, Canada
43
Masapollo M, Polka L, Ménard L. When infants talk, infants listen: pre-babbling infants prefer listening to speech with infant vocal properties. Dev Sci 2015; 19:318-28. [DOI: 10.1111/desc.12298] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2014] [Accepted: 12/12/2014] [Indexed: 10/23/2022]
Affiliation(s)
- Matthew Masapollo
- School of Communication Sciences & Disorders, McGill University, Canada
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Linda Polka
- School of Communication Sciences & Disorders, McGill University, Canada
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Lucie Ménard
- Centre for Research on Brain, Language & Music, McGill University, Canada
- Département de Linguistique, Université du Québec à Montréal, Canada
44
Ambridge B, Kidd E, Rowland CF, Theakston AL. The ubiquity of frequency effects in first language acquisition. JOURNAL OF CHILD LANGUAGE 2015; 42:239-73. [PMID: 25644408 PMCID: PMC4531466 DOI: 10.1017/s030500091400049x] [Citation(s) in RCA: 123] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2013] [Revised: 04/17/2014] [Accepted: 07/13/2014] [Indexed: 05/21/2023]
Abstract
This review article presents evidence for the claim that frequency effects are pervasive in children's first language acquisition, and hence constitute a phenomenon that any successful account must explain. The article is organized around four key domains of research: children's acquisition of single words, inflectional morphology, simple syntactic constructions, and more advanced constructions. In presenting this evidence, we develop five theses. (i) There exist different types of frequency effect, from effects at the level of concrete lexical strings to effects at the level of abstract cues to thematic-role assignment, as well as effects of both token and type, and absolute and relative, frequency. High-frequency forms are (ii) early acquired and (iii) prevent errors in contexts where they are the target, but also (iv) cause errors in contexts in which a competing lower-frequency form is the target. (v) Frequency effects interact with other factors (e.g. serial position, utterance length), and the patterning of these interactions is generally informative with regard to the nature of the learning mechanism. We conclude by arguing that any successful account of language acquisition, from whatever theoretical standpoint, must be frequency sensitive to the extent that it can explain the effects documented in this review, and outline some types of account that do and do not meet this criterion.
Affiliation(s)
- Ben Ambridge
- University of Liverpool, ESRC International Centre for Language and Communicative Development (LuCiD)
- Evan Kidd
- Australian National University, ARC Centre of Excellence for the Dynamics of Language, ESRC International Centre for Language and Communicative Development (LuCiD)
- Caroline F. Rowland
- University of Liverpool, ESRC International Centre for Language and Communicative Development (LuCiD)
- Anna L. Theakston
- University of Manchester, ESRC International Centre for Language and Communicative Development (LuCiD)
45
Altvater-Mackensen N, Grossmann T. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories. Child Dev 2014; 86:362-78. [DOI: 10.1111/cdev.12320] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
46
Floccia C, Nazzi T, Delle Luche C, Poltrock S, Goslin J. English-learning one- to two-year-olds do not show a consonant bias in word learning. JOURNAL OF CHILD LANGUAGE 2014; 41:1085-1114. [PMID: 23866758 DOI: 10.1017/s0305000913000287] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Following the proposal that consonants are more involved than vowels in coding the lexicon (Nespor, Peña & Mehler, 2003), an early lexical consonant bias was found from age 1;2 in French but an equal sensitivity to consonants and vowels from 1;0 to 2;0 in English. As different tasks were used in French and English, we sought to clarify this ambiguity by using an interactive word-learning study similar to that used in French, with British-English-learning toddlers aged 1;4 and 1;11. Children were taught two CVC labels differing on either a consonant or vowel and tested on their pairing of a third object named with one of the previously taught labels, or part of them. In concert with previous research on British-English toddlers, our results provided no evidence of a general consonant bias. The language-specific mechanisms explaining the differential status for consonants and vowels in lexical development are discussed.
47
Abstract
Verbal memory is a fundamental prerequisite for language learning. This study investigated 7-month-olds' (N = 62) ability to remember the identity and order of elements in a multisyllabic word. The results indicate that infants detect changes in the order of edge syllables, or the identity of the middle syllables, but fail to encode the order of middle syllables. This suggests that the representational format of multisyllabic words is determined by core mnemonic biases, which favor accurate encoding of edges and limits the encoding of temporal order for internal segments. The studies support accounts proposing that content and order are encoded separately; in addition, the data show that this dissociation occurs early in development.
Affiliation(s)
- Silvia Benavides-Varela
- International School for Advanced Studies (SISSA, ISAS)
- IRCCS Fondazione Ospedale San Camillo, Lido-Venice
48
Guellaï B, Streri A, Yeung HH. The development of sensorimotor influences in the audiovisual speech domain: some critical questions. Front Psychol 2014; 5:812. [PMID: 25147528 PMCID: PMC4123602 DOI: 10.3389/fpsyg.2014.00812] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2014] [Accepted: 07/09/2014] [Indexed: 11/13/2022] Open
Abstract
Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
Affiliation(s)
- Bahia Guellaï
- Laboratoire Ethologie, Cognition, Développement, Université Paris Ouest Nanterre La Défense, Nanterre, France
- Arlette Streri
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- H. Henny Yeung
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- Université Paris Descartes, Paris Sorbonne Cité, Paris, France
49
The role of the input on the development of the LC bias: a crosslinguistic comparison. Cognition 2014; 132:301-11. [PMID: 24858107 DOI: 10.1016/j.cognition.2014.04.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2013] [Revised: 04/11/2014] [Accepted: 04/12/2014] [Indexed: 11/20/2022]
Abstract
Previous studies have described the existence of a phonotactic bias called the Labial-Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. "bat") than the opposite CL pattern (i.e. "tap"). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
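The corpus analysis this abstract describes reduces to a simple counting procedure: classify the first two consonants of each word as labial or coronal and tally LC versus CL sequences. A minimal sketch follows; the `lc_bias` helper and the tiny orthographic consonant sets are illustrative assumptions, not the study's phonetic transcription scheme.

```python
# Toy orthographic stand-ins for labial and coronal consonants.
LABIALS = set("pbmfv")
CORONALS = set("tdnszl")

def lc_bias(words):
    """Return (lc, cl): counts of words whose first two consonants form a
    labial-coronal (e.g. "bat") vs. coronal-labial (e.g. "tap") sequence.
    Words without two classifiable consonants are skipped."""
    lc = cl = 0
    for word in words:
        consonants = [c for c in word if c in LABIALS | CORONALS][:2]
        if len(consonants) < 2:
            continue
        first, second = consonants
        if first in LABIALS and second in CORONALS:
            lc += 1
        elif first in CORONALS and second in LABIALS:
            cl += 1
    return lc, cl
```

For example, `lc_bias(["bat", "mad", "tap", "dome", "pin"])` counts three LC words against two CL words; run over a full lexicon or corpus, the sign of the difference is the LC/CL bias the study compares across French and Japanese.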
50
Delle Luche C, Durrant S, Floccia C, Plunkett K. Implicit meaning in 18-month-old toddlers. Dev Sci 2014; 17:948-55. [PMID: 24628995 DOI: 10.1111/desc.12164] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2013] [Accepted: 11/12/2013] [Indexed: 11/27/2022]
Abstract
A substantial body of evidence demonstrates that infants understand the meaning of spoken words from as early as 6 months. Yet little is known about their ability to do so in the absence of any visual referent, which would offer diagnostic evidence for an adult-like, symbolic interpretation of words and their use in language mediated thought. We used the head-turn preference procedure to examine whether infants can generate implicit meanings from word forms alone as early as 18 months of age, and whether they are sensitive to meaningful relationships between words. In one condition, toddlers were presented with lists of words taken from the same taxonomic category (e.g. animals or body parts). In a second condition, words taken from two other categories (e.g. clothes and food items) were interleaved within the same list. Listening times were found to be longer in the related-category condition than in the mixed-category condition, suggesting that infants extract the meaning of spoken words and are sensitive to the semantic relatedness between these words. Our results show that infants have begun to construct the rudiments of a semantic system based on taxonomic relations even before they enter a period of accelerated vocabulary growth.