51. Yeung HH, Chen LM, Werker JF. Referential labeling can facilitate phonetic learning in infancy. Child Dev 2015; 85:1036-49. PMID: 24936610. DOI: 10.1111/cdev.12185.
Abstract
All languages employ certain phonetic contrasts when distinguishing words. Infant speech perception is rapidly attuned to these contrasts before many words are learned, thus phonetic attunement is thought to proceed independently of lexical and referential knowledge. Here, evidence to the contrary is provided. Ninety-eight 9-month-old English-learning infants were trained to perceive a non-native Cantonese tone contrast. Two object–tone audiovisual pairings were consistently presented, which highlighted the target contrast (Object A with Tone X; Object B with Tone Y). Tone discrimination was then assessed. Results showed improved tone discrimination if object–tone pairings were perceived as being referential word labels, although this effect was modulated by vocabulary size. Results suggest how lexical and referential knowledge could play a role in phonetic attunement.
52. Erickson LC, Thiessen ED. Statistical learning of language: Theory, validity, and predictions of a statistical learning account of language acquisition. Developmental Review 2015. DOI: 10.1016/j.dr.2015.05.002.
53. Ong JH, Burnham D, Escudero P.
Abstract
This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention during the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair, and their performance was assessed using a discrimination task before and after training. During training, participants heard either a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories, which should facilitate their discrimination. Participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both the Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants' auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group showed above-chance improvement on only one. These results suggest that non-tone language listeners are able to learn lexical tones distributionally, but only when auditory attention is encouraged during the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution.
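The unimodal/bimodal manipulation described above is the standard distributional-learning design. A minimal sketch (with illustrative token frequencies and a toy two-category learner, not the study's Thai tone stimuli) shows why bimodal exposure should yield better-separated categories than unimodal exposure:

```python
import random
random.seed(0)

# 8-step tone continuum (arbitrary acoustic units). Token frequencies are
# either unimodal (one peak in the middle, favoring one category) or
# bimodal (two peaks near the ends, favoring two categories). The values
# are illustrative, not the study's materials.
CONTINUUM = list(range(1, 9))
UNIMODAL = [1, 2, 3, 4, 4, 3, 2, 1]
BIMODAL = [4, 3, 2, 1, 1, 2, 3, 4]

def sample(freqs, n=400):
    """Draw training tokens from the continuum with the given frequencies."""
    return random.choices(CONTINUUM, weights=freqs, k=n)

def two_means(xs, iters=50):
    """Tiny 1-D k-means (k=2): the learner's two candidate tone categories."""
    lo, hi = float(min(xs)), float(max(xs))
    for _ in range(iters):
        low = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        high = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(low) / len(low), sum(high) / len(high)
    return lo, hi, low, high

def separation(xs):
    """Between-category distance over within-category spread; higher means
    the two learned categories are more discriminable."""
    lo, hi, low, high = two_means(xs)
    resid = [x - lo for x in low] + [x - hi for x in high]
    spread = (sum(r * r for r in resid) / len(resid)) ** 0.5
    return (hi - lo) / spread

sep_bimodal = separation(sample(BIMODAL))
sep_unimodal = separation(sample(UNIMODAL))
print(sep_bimodal > sep_unimodal)  # bimodal exposure separates categories more
```

On this toy learner, the bimodal sample yields clusters near the two frequency peaks with little overlap, while the unimodal sample forces an arbitrary split through the dense middle of the distribution.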
Affiliation(s)
- Jia Hoong Ong
- The MARCS Institute, University of Western Sydney, Sydney, New South Wales, Australia
- Denis Burnham
- The MARCS Institute, University of Western Sydney, Sydney, New South Wales, Australia
- Paola Escudero
- The MARCS Institute, University of Western Sydney, Sydney, New South Wales, Australia
54. Comins JA, Gentner TQ. Pattern-Induced Covert Category Learning in Songbirds. Curr Biol 2015; 25:1873-7. PMID: 26119748. PMCID: PMC4626452. DOI: 10.1016/j.cub.2015.05.046.
Abstract
Language is uniquely human, but its acquisition may involve cognitive capacities shared with other species. During development, language experience alters speech sound (phoneme) categorization. Newborn infants distinguish the phonemes in all languages but by 10 months show adult-like greater sensitivity to native language phonemic contrasts than non-native contrasts. Distributional theories account for phonetic learning by positing that infants infer category boundaries from modal distributions of speech sounds along acoustic continua. For example, tokens of the sounds /b/ and /p/ cluster around different mean voice onset times. To disambiguate overlapping distributions, contextual theories propose that phonetic category learning is informed by higher-level patterns (e.g., words) in which phonemes normally occur. For example, the vowel sounds /ɪ/ and /ɛ/ can occupy similar perceptual spaces but can be distinguished in the context of "with" and "well." Both distributional and contextual cues appear to function in speech acquisition. Non-human species also benefit from distributional cues for category learning, but whether category learning benefits from contextual information in non-human animals is unknown. The use of higher-level patterns to guide lower-level category learning may reflect uniquely human capacities tied to language acquisition or more general learning abilities reflecting shared neurobiological mechanisms. Using songbirds (European starlings), we show that higher-level pattern learning covertly enhances categorization of natural communication sounds. This observation mirrors the support for contextual theories of phonemic category learning in humans and demonstrates a general form of learning not unique to humans or language.
Affiliation(s)
- Jordan A Comins
- Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
- Timothy Q Gentner
- Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA; Section of Neurobiology, University of California San Diego, La Jolla, CA 92093, USA; Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA; Kavli Institute for Brain and Mind, University of California San Diego, La Jolla, CA 92093, USA
55. The development of voicing categories: a quantitative review of over 40 years of infant speech perception research. Psychon Bull Rev 2015; 21:884-906. PMID: 24550074. DOI: 10.3758/s13423-013-0569-y.
Abstract
Most research on infant speech categories has relied on measures of discrimination. Such work often employs categorical perception as a linking hypothesis to enable inferences about categorization on the basis of discrimination measures. However, a large number of studies with adults challenge the utility of categorical perception in describing adult speech perception, and this in turn calls into question how to interpret measures of infant speech discrimination. We propose here a parallel channels model of discrimination (built on Pisoni and Tash, Perception & Psychophysics, 15(2), 285-290, 1974), which posits that both a noncategorical or veridical encoding of speech cues and category representations can simultaneously contribute to discrimination. This model can produce categorical perception effects without positing any warping of the acoustic signal, but it also reframes how we think about infant discrimination and development. We test this model by conducting a quantitative review of 20 studies examining infants' discrimination of voice onset time contrasts. This review suggests that within-category discrimination is surprisingly prevalent even in classic studies and that, averaging across studies, discrimination is related to continuous acoustic distance. It also identifies several methodological factors that may mask our ability to see this. Finally, it suggests that infant discrimination may improve over development, contrary to the commonly held notion of perceptual narrowing. These results are discussed in terms of theories of speech development that may require such continuous sensitivity.
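A toy rendition of the parallel-channels idea (hypothetical boundary and weight values, not the authors' model code) makes the logic concrete: discriminability sums a continuous acoustic channel and a categorical channel, so within-category pairs remain discriminable while equal-sized steps across the boundary get a boost, without any warping of the acoustic encoding:

```python
# Assumed /b/-/p/ VOT boundary and channel weights: hypothetical values
# chosen for illustration only.
VOT_BOUNDARY_MS = 25.0

def category(vot_ms: float) -> str:
    """Categorical channel: map a VOT value onto a /b/ or /p/ label."""
    return "p" if vot_ms >= VOT_BOUNDARY_MS else "b"

def discriminability(vot1: float, vot2: float,
                     w_acoustic: float = 0.02, w_category: float = 1.0) -> float:
    """Sum of a continuous channel (scales with raw acoustic distance) and
    a categorical channel (fixed boost only when the labels differ)."""
    acoustic = w_acoustic * abs(vot1 - vot2)
    categorical = w_category if category(vot1) != category(vot2) else 0.0
    return acoustic + categorical

# A 20 ms step inside /b/ is still discriminable (nonzero continuous term),
# but the same-sized step across the boundary is markedly easier:
within = discriminability(0.0, 20.0)    # both /b/
across = discriminability(15.0, 35.0)   # /b/ vs /p/, same 20 ms step
print(within, across)
```

The same additive structure is what lets the model reproduce classic categorical-perception peaks at the boundary while still predicting the above-chance within-category discrimination that the review finds to be prevalent.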
56. Tsuji S, Nishikawa K, Mazuka R. Segmental distributions and consonant-vowel association patterns in Japanese infant- and adult-directed speech. J Child Lang 2014; 41:1276-1304. PMID: 24229534. DOI: 10.1017/s0305000913000469.
Abstract
Japanese infant-directed speech (IDS) and adult-directed speech (ADS) were compared on their segmental distributions and consonant-vowel association patterns. Consistent with findings in other languages, a higher ratio of segments that are generally produced early was found in IDS compared to ADS: more labial consonants and low-central vowels, but fewer fricatives. Consonant-vowel associations also favored the early produced labial-central, coronal-front, coronal-central, and dorsal-back patterns. On the other hand, clear language-specific patterns included a higher frequency of dorsals, affricates, geminates, and moraic nasals in IDS. These segments are frequent in adult Japanese, but not in the early productions or the IDS of other studied languages. In combination with previous results, the current study suggests that both fine-tuning (an increased use of early produced segments) and highlighting (an increased use of language-specifically relevant segments) might modify IDS on the segmental level.
Affiliation(s)
- Sho Tsuji
- Radboud Universiteit Nijmegen, International Max-Planck Research School for Language Sciences, and Laboratory for Language Development, RIKEN Brain Science Institute
- Kenya Nishikawa
- Laboratory for Language Development, RIKEN Brain Science Institute
- Reiko Mazuka
- Laboratory for Language Development, RIKEN Brain Science Institute, and Duke University
57. Watson TL, Robbins RA, Best CT. Infant perceptual development for faces and spoken words: an integrated approach. Dev Psychobiol 2014; 56:1454-81. PMID: 25132626. PMCID: PMC4231232. DOI: 10.1002/dev.21243.
Abstract
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception.
Affiliation(s)
- Tamara L Watson
- School of Social Science and Psychology, University of Western Sydney, New South Wales, Australia
- MARCS Institute, University of Western Sydney, New South Wales, Australia
- Rachel A Robbins
- School of Social Science and Psychology, University of Western Sydney, New South Wales, Australia
- Catherine T Best
- MARCS Institute, University of Western Sydney, New South Wales, Australia
- School of Humanities and Communication Arts, University of Western Sydney, New South Wales, Australia
58. Kim HI, Johnson SP. Detecting 'infant-directedness' in face and voice. Dev Sci 2014; 17:621-7. PMID: 24576091. PMCID: PMC4069237. DOI: 10.1111/desc.12146.
Abstract
Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces.
Affiliation(s)
- Hojin I Kim
- Department of Psychology, University of California, Los Angeles, USA
59. Object labeling influences infant phonetic learning and generalization. Cognition 2014; 132:151-63. PMID: 24809743. DOI: 10.1016/j.cognition.2014.04.001.
Abstract
Different kinds of speech sounds are used to signify possible word forms in every language. For example, lexical stress is used in Spanish (/'be.be/, 'he/she drinks' versus /be.'be/, 'baby'), but not in French (/'be.be/ and /be.'be/ both mean 'baby'). Infants learn many such native language phonetic contrasts in their first year of life, likely using a number of cues from parental speech input. One such cue could be parents' object labeling, which can explicitly highlight relevant contrasts. Here we ask whether phonetic learning from object labeling is abstract, that is, whether learning can generalize to new phonetic contexts. We investigate this issue in the prosodic domain, as the abstraction of prosodic cues (like lexical stress) has been shown to be particularly difficult. One group of 10-month-old French-learners was given consistent word labels that contrasted on lexical stress (e.g., Object A was labeled /'ma.bu/, and Object B was labeled /ma.'bu/). Another group of 10-month-olds was given inconsistent word labels (i.e., mixed pairings), and stress discrimination in both groups was measured in a test phase with words made up of new syllables. Infants trained with consistently contrastive labels showed an earlier effect of discrimination compared to infants trained with inconsistent labels. Results indicate that phonetic learning from object labeling can indeed generalize, and suggest one way infants may learn the sound properties of their native language(s).
60. Danielson DK, Seidl A, Onishi KH, Alamian G, Cristia A. The acoustic properties of bilingual infant-directed speech. J Acoust Soc Am 2014; 135:EL95-EL101. PMID: 25234921. DOI: 10.1121/1.4862881.
Abstract
Does the acoustic input for bilingual infants equal the conjunction of the input heard by monolinguals of each separate language? The present letter tackles this question, focusing on maternal speech addressed to 11-month-old infants, on the cusp of perceptual attunement. The acoustic characteristics of the point vowels /a,i,u/ were measured in the spontaneous infant-directed speech of French-English bilingual mothers, as well as in the speech of French and English monolingual mothers. Bilingual caregivers produced their two languages with acoustic prosodic separation equal to that of the monolinguals, while also conveying distinct spectral characteristics of the point vowels in their two languages.
Affiliation(s)
- D Kyle Danielson
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver, British Columbia V6T 1Z4, Canada
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University, Heavilon Hall, West Lafayette, Indiana 47907
- Kristine H Onishi
- Department of Psychology, McGill University, 1205 avenue du Docteur-Penfield, Montreal, Quebec H3A 1B1, Canada
- Golnoush Alamian
- Department of Psychology, McGill University, 1205 avenue du Docteur-Penfield, Montreal, Quebec H3A 1B1, Canada
- Alejandrina Cristia
- Laboratoire de Sciences Cognitives et Psycholinguistique, Centre National de la Recherche Scientifique, Pavillon Jardin, 29 rue d'Ulm, 75005 Paris, France
61. McMurray B, Kovack-Lesh KA, Goodwin D, McEchron W. Infant directed speech and the development of speech perception: enhancing development or an unintended consequence? Cognition 2013; 129:362-78. PMID: 23973465. PMCID: PMC3874452. DOI: 10.1016/j.cognition.2013.07.015.
Abstract
Infant directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme "point" vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) if these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels, which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to the slower rate of speech in IDS. Measurements of the vowels suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
Affiliation(s)
- Bob McMurray
- Dept. of Psychology, University of Iowa, United States; Dept. of Communication Sciences and Disorders, University of Iowa, United States; Dept. of Linguistics, University of Iowa, United States; The Delta Center, University of Iowa, United States
62. Saint-Georges C, Chetouani M, Cassel R, Apicella F, Mahdhaoui A, Muratori F, Laznik MC, Cohen D. Motherese in interaction: at the cross-road of emotion and cognition? (A systematic review). PLoS One 2013; 8:e78103. PMID: 24205112. PMCID: PMC3800080. DOI: 10.1371/journal.pone.0078103.
Abstract
Various aspects of motherese, also known as infant-directed speech (IDS), have been studied for many years. As it is a widespread phenomenon, it is suspected to play some important roles in infant development. Therefore, our purpose was to provide an update of the evidence accumulated by reviewing all of the empirical or experimental studies published since 1966 on the driving factors and impacts of IDS. Two databases were screened and 144 relevant studies were retained. General linguistic and prosodic characteristics of IDS were found in a variety of languages, and IDS was not restricted to mothers. IDS varied with factors associated with the caregiver (e.g., cultural, psychological and physiological) and the infant (e.g., reactivity and interactive feedback). IDS promoted infants' affect, attention, and language learning. Cognitive aspects of IDS have been widely studied whereas affective ones still need to be developed. However, during interactions, the following two observations were notable: (1) IDS prosody reflects emotional charges and meets infants' preferences, and (2) mother-infant contingency and synchrony are crucial for IDS production and prolongation. Thus, IDS is part of an interactive loop that may play an important role in infants' cognitive and social development.
Affiliation(s)
- Catherine Saint-Georges
- Department of Child and Adolescent Psychiatry, Pitié-Salpêtrière Hospital, Université Pierre et Marie Curie, Paris, France
- Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique 7222, Université Pierre et Marie Curie, Paris, France
- Mohamed Chetouani
- Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique 7222, Université Pierre et Marie Curie, Paris, France
- Raquel Cassel
- Department of Child and Adolescent Psychiatry, Pitié-Salpêtrière Hospital, Université Pierre et Marie Curie, Paris, France
- Laboratoire de Psychopathologie et Processus de Santé (LPPS, EA 4057), Institut de Psychologie de l'Université Paris Descartes, Paris, France
- Fabio Apicella
- IRCCS Scientific Institute Stella Maris, University of Pisa, Pisa, Italy
- Ammar Mahdhaoui
- Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique 7222, Université Pierre et Marie Curie, Paris, France
- Filippo Muratori
- IRCCS Scientific Institute Stella Maris, University of Pisa, Pisa, Italy
- Marie-Christine Laznik
- Department of Child and Adolescent Psychiatry, Association Santé Mentale du 13ème, Centre Alfred Binet, Paris, France
- David Cohen
- Department of Child and Adolescent Psychiatry, Pitié-Salpêtrière Hospital, Université Pierre et Marie Curie, Paris, France
- Institut des Systèmes Intelligents et de Robotique, Centre National de la Recherche Scientifique 7222, Université Pierre et Marie Curie, Paris, France
63. Bosseler AN, Taulu S, Pihko E, Mäkelä JP, Imada T, Ahonen A, Kuhl PK. Theta brain rhythms index perceptual narrowing in infant speech perception. Front Psychol 2013; 4:690. PMID: 24130536. PMCID: PMC3795304. DOI: 10.3389/fpsyg.2013.00690.
Abstract
The development of speech perception shows a dramatic transition between infancy and adulthood. Between 6 and 12 months, infants' initial ability to discriminate all phonetic units across the world's languages narrows: native discrimination increases while non-native discrimination shows a steep decline. We used magnetoencephalography (MEG) to examine whether brain oscillations in the theta band (4-8 Hz), reflecting increases in attention and cognitive effort, would provide a neural measure of the perceptual narrowing phenomenon in speech. Using an oddball paradigm, we varied speech stimuli in two dimensions, stimulus frequency (frequent vs. infrequent) and language (native vs. non-native speech syllables), and tested 6-month-old infants, 12-month-old infants, and adults. We hypothesized that 6-month-old infants would show increased relative theta power (RTP) for frequent syllables, regardless of their status as native or non-native syllables, reflecting young infants' attention and cognitive effort in response to highly frequent stimuli ("statistical learning"). In adults, we hypothesized increased RTP for non-native stimuli, regardless of their presentation frequency, reflecting increased cognitive effort for non-native phonetic categories. The 12-month-old infants were expected to show a pattern in transition, but one more similar to adults than to 6-month-old infants. The MEG brain rhythm results supported these hypotheses. We suggest that perceptual narrowing in speech perception is governed by an implicit learning process. This learning process involves an implicit shift in attention from frequent events (infants) to learned categories (adults). Theta brain oscillatory activity may provide an index of perceptual narrowing beyond speech, and would offer a test of whether the early speech learning process is governed by domain-general or domain-specific processes.
Affiliation(s)
- Alexis N. Bosseler
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Elina Pihko
- Brain Research Unit, O.V. Lounasmaa Laboratory, School of Science, Aalto University, Helsinki, Finland
- Jyrki P. Mäkelä
- BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Central Hospital, Helsinki, Finland
- Toshiaki Imada
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Patricia K. Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
64. Friendly RH, Rendall D, Trainor LJ. Learning to differentiate individuals by their voices: Infants' individuation of native- and foreign-species voices. Dev Psychobiol 2013; 56:228-37. DOI: 10.1002/dev.21164.
Affiliation(s)
- Rayna H. Friendly
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4L8
- Drew Rendall
- Department of Psychology, University of Lethbridge, 4401 University Drive, Lethbridge, Alberta, Canada T1K 3M4
- Laurel J. Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4L8
- Rotman Research Institute, Baycrest Centre, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1
65. Robertson S, von Hapsburg D, Hay JS. The effect of hearing loss on the perception of infant- and adult-directed speech. J Speech Lang Hear Res 2013; 56:1108-1119. PMID: 23798510. DOI: 10.1044/1092-4388(2012/12-0110).
Abstract
PURPOSE Infant-directed speech (IDS) facilitates language learning in infants with normal hearing (NH), compared to adult-directed speech (ADS). It is well established that infants with NH prefer to listen to IDS over ADS. The purpose of this study was to determine whether infants with hearing impairment (HI), like their NH peers, show a listening preference for IDS over ADS. METHOD A total of 36 infants were tested on their listening preference for IDS compared with ADS using the central fixation preference procedure: 9 HI infants (mean chronological age of 19.1 months, mean listening age of 7.7 months), 9 NH infants with a similar average listening age (7.8 months), and 9 NH infants with a similar average chronological age (18.6 months). RESULTS Infants with HI significantly preferred listening to IDS over ADS. The preference for IDS was also seen in the younger NH infants, but not in the older NH controls. Additionally, HI infants showed shorter overall looking times compared to either NH group. CONCLUSION Although infants with hearing loss displayed shorter looking times to speech compared to NH controls, HI infants nonetheless appear to have sufficient access to the speech signal to display a developmentally appropriate preference for IDS over ADS.
66. Learning phonemic vowel length from naturalistic recordings of Japanese infant-directed speech. PLoS One 2013; 8:e51594. PMID: 23437036. PMCID: PMC3577837. DOI: 10.1371/journal.pone.0051594.
Abstract
In Japanese, vowel duration can distinguish the meaning of words. In order for infants to learn this phonemic contrast using simple distributional analyses, there should be reliable differences in the duration of short and long vowels, and the frequency distribution of vowels must make these differences salient enough in the input. In this study, we evaluate these requirements of phonemic learning by analyzing the duration of vowels from over 11 hours of Japanese infant-directed speech. We found that long vowels are substantially longer than short vowels in the input directed to infants, for each of the five oral vowels. However, we also found that learning phonemic length from the overall distribution of vowel duration is not going to be easy for a simple distributional learner, because of the large base-rate effect (i.e., 94% of vowels are short), and because of the many factors that influence vowel duration (e.g., intonational phrase boundaries, word boundaries, and vowel height). Therefore, a successful learner would need to take into account additional factors such as prosodic and lexical cues in order to discover that duration can contrast the meaning of words in Japanese. These findings highlight the importance of taking into account the naturalistic distributions of lexicons and acoustic cues when modeling early phonemic learning.
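The base-rate problem described above can be illustrated with a minimal simulation. The duration means, standard deviations, and window sizes below are illustrative assumptions, not the paper's measurements; only the 94%/6% short/long base rate comes from the abstract:

```python
import random
random.seed(1)

# Simulated vowel durations (ms): short vowels around 70 ms, long vowels
# around 150 ms (illustrative values), with the reported 94%/6% base rate
# and large contextual variability (phrase position, vowel height, etc.)
# blurring each distribution.
N = 20000
durations = []
for _ in range(N):
    if random.random() < 0.06:
        durations.append(random.gauss(150, 35))   # long vowel
    else:
        durations.append(random.gauss(70, 25))    # short vowel

# Density near the short-vowel mode dwarfs density near the long-vowel
# mode, so the pooled distribution gives a simple distributional learner
# little evidence of a second duration category.
near_short = sum(1 for d in durations if abs(d - 70) < 5)
near_long = sum(1 for d in durations if abs(d - 150) < 5)
print(near_short, near_long)
```

Even though the long-vowel mean is roughly twice the short-vowel mean here, the skewed base rate and overlapping spread leave the long-vowel mode buried in the short-vowel tail, which is the abstract's point about needing prosodic and lexical cues beyond raw durations.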
67
A connectionist model of category learning by individuals with high-functioning autism spectrum disorder. Cognitive, Affective, & Behavioral Neuroscience 2013; 13:371-89. [DOI: 10.3758/s13415-012-0148-0]
68
Julien HM, Munson B. Modifying speech to children based on their perceived phonetic accuracy. Journal of Speech, Language, and Hearing Research 2012; 55:1836-49. [PMID: 22744140] [PMCID: PMC3929121] [DOI: 10.1044/1092-4388(2012/11-0131)]
Abstract
PURPOSE The authors examined the relationship between adults' perception of the accuracy of children's speech and acoustic detail in their subsequent productions to children. METHOD Twenty-two adults participated in a task in which they rated the accuracy of 2- and 3-year-old children's word-initial /s/ and /ʃ/ using a visual analog scale (VAS), then produced a token of the same word as if they were responding to the child whose speech they had just rated. RESULTS The duration of adults' fricatives varied as a function of their perception of the accuracy of children's speech: longer fricatives were produced following productions that they rated as inaccurate. This tendency to modify duration in response to perceived inaccurate tokens was mediated by measures of self-reported experience interacting with children. However, speakers did not increase the spectral distinctiveness of their fricatives following the perception of inaccurate tokens. CONCLUSION These results suggest that adults modify temporal features of their speech in response to perceiving children's inaccurate productions. These longer fricatives are potentially both enhanced input to children and an error-corrective signal.
69
Dillon B, Dunbar E, Idsardi W. A single-stage approach to learning phonological categories: insights from Inuktitut. Cogn Sci 2012; 37:344-77. [PMID: 23137418] [DOI: 10.1111/cogs.12008]
Abstract
To acquire one's native phonological system, language-specific phonological categories and the relationships among them must be extracted from the input. The acquisition of categories and the acquisition of relationships have each, in their own right, been the focus of intense research; it is remarkable, however, that these two lines of research have proceeded largely independently of one another. We argue that this has led to the implicit view that phonological acquisition is a "two-stage" process: Phonetic categories are first acquired and then subsequently mapped onto abstract phoneme categories. We present simulations that suggest two problems with this view: the learner might mistake the phoneme-level categories for phonetic-level categories and thus be unable to learn the relationships between phonetic-level categories; alternatively, the learner might construct inaccurate phonetic-level representations that prevent it from finding regular relations among them. We suggest an alternative conception of the phonological acquisition problem that sidesteps this apparent inevitability and acquires phonemic categories in a single stage. Using acoustic data from Inuktitut, we show that this model reliably converges on a set of phoneme-level categories and phonetic-level relations among subcategories, without making use of a lexicon.
Affiliation(s)
- Brian Dillon
- Department of Linguistics, University of Massachusetts, MA, USA
70

71
Kondaurova MV, Bergeson TR, Dilley LC. Effects of deafness on acoustic characteristics of American English tense/lax vowels in maternal speech to infants. The Journal of the Acoustical Society of America 2012; 132:1039-49. [PMID: 22894224] [PMCID: PMC3427367] [DOI: 10.1121/1.4728169]
Abstract
Recent studies have demonstrated that mothers exaggerate phonetic properties of infant-directed (ID) speech. However, these studies focused on a single acoustic dimension (frequency), whereas speech sounds are composed of multiple acoustic cues. Moreover, little is known about how mothers adjust phonetic properties of speech to children with hearing loss. This study examined mothers' production of frequency and duration cues to the American English tense/lax vowel contrast in speech to profoundly deaf (N = 14) and normal-hearing (N = 14) infants, and to an adult experimenter. First and second formant frequencies and vowel duration of tense (/i/, /u/) and lax (/ɪ/, /ʊ/) vowels were measured. Results demonstrated that for both infant groups mothers hyperarticulated the acoustic vowel space and increased vowel duration in ID speech relative to adult-directed speech. Mean F2 values were decreased for the /u/ vowel and increased for the /ɪ/ vowel, and vowel duration was longer for the /i/, /u/, and /ɪ/ vowels in ID speech. However, neither acoustic cue differed in speech to hearing-impaired versus normal-hearing infants. These results suggest that the formant frequencies and vowel durations that differentiate American English tense/lax vowel contrasts are modified in ID speech regardless of the hearing status of the addressee.
Affiliation(s)
- Maria V Kondaurova
- Department of Otolaryngology-Head & Neck Surgery, Indiana University School of Medicine, 699 Riley Hospital Drive-RR044, Indianapolis, Indiana 46202, USA.
72
Watson LR, Roberts JE, Baranek GT, Mandulak KC, Dalton JC. Behavioral and physiological responses to child-directed speech of children with autism spectrum disorders or typical development. J Autism Dev Disord 2012; 42:1616-29. [PMID: 22071788] [PMCID: PMC3402684] [DOI: 10.1007/s10803-011-1401-z]
Abstract
Young boys with autism were compared to typically developing boys on responses to nonsocial and child-directed speech (CDS) stimuli. Behavioral (looking) and physiological (heart rate and respiratory sinus arrhythmia) measures were collected. Boys with autism looked as much as chronological-age-matched peers at nonsocial stimuli, but less at CDS stimuli. Boys with autism and language-age-matched peers differed in patterns of looking at live versus videotaped CDS stimuli. Boys with autism demonstrated faster heart rates than chronological-age-matched peers, but did not differ significantly on respiratory sinus arrhythmia. Reduced attention during CDS may restrict language-learning opportunities for children with autism. The heart rate findings suggest that young children with autism have a nonspecific elevated arousal level.
Affiliation(s)
- Linda R Watson
- Division of Speech and Hearing Sciences, CB# 7190, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7190, USA.
73
Werker JF, Yeung HH, Yoshida KA. How Do Infants Become Experts at Native-Speech Perception? Current Directions in Psychological Science 2012. [DOI: 10.1177/0963721412449459]
Abstract
Infants begin life ready to learn any of the world’s languages, but they quickly become speech-perception experts in their native language. Although this phenomenon has been well described, the mechanisms leading to native-language-listening expertise have not. In this article, we provide an in-depth review of one learning mechanism: distributional learning (DL), which has been shown to be important in phonetic category learning. DL is a domain-general statistical learning mechanism that involves tracking the relative frequency of phonetic tokens in speech input. Although DL is powerful, recent research has identified limitations to it as well. We conclude with a discussion of possible supplementary phonetic-learning mechanisms, focusing on the surrounding context in which infants hear phonetic tokens and on how that context can augment DL and highlight important linguistic differences between perceptually similar stimuli.
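The frequency-tracking idea behind distributional learning can be made concrete with a minimal sketch (token counts below are hypothetical, in the style of classic distributional-learning experiments, not data from this review): a bimodal distribution of tokens along an acoustic continuum supports two categories, while a unimodal distribution supports one.

```python
import numpy as np

# Token counts over an 8-step acoustic continuum (e.g. VOT-like steps).
# Counts are hypothetical illustrations.
bimodal  = np.array([5, 20, 35, 15, 15, 35, 20, 5])
unimodal = np.array([5, 15, 25, 45, 35, 20, 10, 5])

def n_modes(counts):
    """Number of local maxima in a lightly smoothed frequency histogram."""
    c = np.convolve(counts, [0.25, 0.5, 0.25], mode="same")
    return sum(
        (i == 0 or c[i] > c[i - 1]) and (i == len(c) - 1 or c[i] > c[i + 1])
        for i in range(len(c))
    )

# Two modes in the exposure distribution suggest two phonetic categories;
# one mode suggests a single category spanning the continuum.
```

A learner that counts tokens this way infers two categories from the bimodal exposure (`n_modes(bimodal)` is 2) but only one from the unimodal exposure (`n_modes(unimodal)` is 1).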
Affiliation(s)
- H. Henny Yeung
- Université Paris Descartes, Sorbonne Paris Cité
- Centre National de la Recherche Scientifique, Paris, France
74
Pons F, Albareda-Castellot B, Sebastián-Gallés N. The interplay between input and initial biases: asymmetries in vowel perception during the first year of life. Child Dev 2012; 83:965-76. [PMID: 22364434] [DOI: 10.1111/j.1467-8624.2012.01740.x]
Abstract
Vowels with extreme articulatory-acoustic properties act as natural referents. Infant perceptual asymmetries point to an underlying bias favoring these referent vowels. However, as language experience is gathered, distributional frequency of speech sounds could modify this initial bias. The perception of the /i/-/e/ contrast was explored in 144 Catalan- and Spanish-learning infants (2 languages with a different distribution of vowel frequency of occurrence) at 4, 6, and 12 months. The results confirmed an acoustic bias at 4 and 6 months in all infants. However, at 12 months, discrimination was not affected by the acoustic bias but by the frequency of occurrence of the vowel.
Affiliation(s)
- Ferran Pons
- Institute for Brain, Cognition and Behavior (IR3C), and Universitat de Barcelona, Departament de Psicologia Bàsica, Facultat de Psicologia, Barcelona, Spain.
75
Abstract
Infants are adept at tracking statistical regularities to identify word boundaries in pause-free speech. However, researchers have questioned the relevance of statistical learning mechanisms to language acquisition, since previous studies have used simplified artificial languages that ignore the variability of real language input. The experiments reported here embraced a key dimension of variability in infant-directed speech. English-learning infants (8-10 months) listened briefly to natural Italian speech that contained either fluent speech only or a combination of fluent speech and single-word utterances. Listening times revealed successful learning of the statistical properties of target words only when words appeared both in fluent speech and in isolation; brief exposure to fluent speech alone was not sufficient to facilitate detection of the words' statistical properties. This investigation suggests that statistical learning mechanisms actually benefit from variability in utterance length, and provides the first evidence that isolated words and longer utterances act in concert to support infant word segmentation.
Affiliation(s)
- Casey Lew-Williams
- Department of Psychology and Waisman Center, University of Wisconsin-Madison, WI 53705-2280, USA.
76
Kuhl PK. Early Language Learning and Literacy: Neuroscience Implications for Education. Mind, Brain, and Education 2011; 5:128-142. [PMID: 21892359] [PMCID: PMC3164118] [DOI: 10.1111/j.1751-228x.2011.01121.x]
Abstract
The last decade has produced an explosion in neuroscience research examining young children's early processing of language that has implications for education. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. In the arena of language, the neural signatures of learning can be documented at a remarkably early point in development, and these early measures predict performance in children's language and pre-reading abilities in the second, third, and fifth year of life, a finding with theoretical and educational import. There is evidence that children's early mastery of language requires learning in a social context, and this finding also has important implications for education. Evidence relating socio-economic status (SES) to brain function for language suggests that SES should be considered a proxy for the opportunity to learn and that the complexity of language input is a significant factor in developing brain areas related to language. The data indicate that the opportunity to learn from complex stimuli and events is vital early in life, and that success in school begins in infancy.
Affiliation(s)
- Patricia K Kuhl
- Institute for Learning & Brain Sciences, University of Washington
77
Ma W, Golinkoff RM, Houston D, Hirsh-Pasek K. Word Learning in Infant- and Adult-Directed Speech. Language Learning and Development 2011; 7:185-201. [PMID: 29129970] [PMCID: PMC5679190] [DOI: 10.1080/15475441.2011.579839]
Abstract
Infant-directed speech (IDS), compared with adult-directed speech (ADS), is characterized by a slower rate, a higher fundamental frequency, greater pitch variations, longer pauses, repetitive intonational structures, and shorter sentences. Despite studies on the properties of IDS, there is no direct demonstration of its effects on word learning in infants. This study examined whether 21- and 27-month-old children learned novel words better in IDS than in ADS. Two major findings emerged. First, 21-month-olds reliably learned words only in the IDS condition, although children with relatively larger vocabularies than their peers learned in the ADS condition as well. Second, 27-month-olds reliably learned the words in the ADS condition. These results support the implicitly held assumption that IDS does in fact facilitate word mapping at the start of lexical acquisition and that its influence wanes as language development proceeds.
Affiliation(s)
- Weiyi Ma
- School of Foreign Languages, Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China
- Roberta Michnick Golinkoff
- School of Education, and Departments of Psychology and Linguistic and Cognitive Science, University of Delaware
78
Inoue T, Nakagawa R, Kondou M, Koga T, Shinohara K. Discrimination between mothers’ infant- and adult-directed speech using hidden Markov models. Neurosci Res 2011; 70:62-70. [DOI: 10.1016/j.neures.2011.01.010]
79
McMurray B, Jongman A. What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations. Psychol Rev 2011; 118:219-46. [PMID: 21417542] [PMCID: PMC3523696] [DOI: 10.1037/a0022325]
Abstract
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2,880 fricative productions (Jongman, Wayland, & Wong, 2000) spanning many talker and vowel contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values and manipulated the information in the training set to contrast (a) models based on a small number of invariant cues, (b) models using all cues without compensation, and (c) models in which cues underwent compensation for contextual factors. Compensation was modeled by computing cues relative to expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved a similar accuracy to listeners and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed.
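The "cues relative to expectations" idea can be sketched as follows (a simplified illustration on simulated data, not the authors' model or corpus; all numbers are hypothetical): a raw spectral cue is swamped by talker variability, but expressing each token relative to the expected value for its talker recovers the category contrast.

```python
import numpy as np

rng = np.random.default_rng(1)
n_talkers, n_per = 20, 30
n = n_talkers * n_per

# Simulated raw cue (think: fricative spectral mean, in Hz). Talkers vary
# widely; the category adds a smaller fixed offset, plus noise.
talker_base = rng.normal(5000, 800, n_talkers)
talker = np.repeat(np.arange(n_talkers), n_per)
category = rng.integers(0, 2, n)          # two fricative categories
cue = (talker_base[talker]
       + np.where(category == 1, -400.0, 400.0)
       + rng.normal(0, 150, n))

# Raw-cue classifier: one threshold at the grand mean of the cue.
raw_acc = np.mean((cue < cue.mean()).astype(int) == category)

# Relative-cue classifier: compute each cue relative to an expectation
# (here simply the talker's own mean) and classify the residual.
talker_mean = np.array([cue[talker == t].mean() for t in range(n_talkers)])
residual = cue - talker_mean[talker]
rel_acc = np.mean((residual < 0).astype(int) == category)
```

In the paper, C-CuRE computes expectations from multiple contextual factors (talker, vowel, and so on) via regression; the talker-mean subtraction above is the simplest instance of that idea, and in this simulation it raises classification accuracy well above the raw-cue threshold.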
Affiliation(s)
- Bob McMurray
- Department of Psychology, University of Iowa, Iowa City, IA 52240, USA.
80
Zhang Y, Koerner T, Miller S, Grice-Patil Z, Svec A, Akbari D, Tusler L, Carney E. Neural coding of formant-exaggerated speech in the infant brain. Dev Sci 2010; 14:566-81. [PMID: 21477195] [DOI: 10.1111/j.1467-7687.2010.01004.x]
Affiliation(s)
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA.
81
Kondaurova MV, Francis AL. The role of selective attention in the acquisition of English tense and lax vowels by native Spanish listeners: comparison of three training methods. Journal of Phonetics 2010; 38:569-587. [PMID: 21499531] [PMCID: PMC3076995] [DOI: 10.1016/j.wocn.2010.08.003]
Abstract
This study investigates the role of two processes, cue enhancement (learning to attend to acoustic cues which characterize a speech contrast for native listeners) and cue inhibition (learning to ignore cues that do not), in the acquisition of the American English tense and lax ([i] vs. [ɪ]) vowels by native Spanish listeners. This contrast is acoustically distinguished by both vowel spectrum and duration. However, while native English listeners rely primarily on spectrum, inexperienced Spanish listeners tend to rely exclusively on duration. Twenty-nine native Spanish listeners, initially reliant on vowel duration, received either enhancement training, inhibition training, or training with a natural cue distribution. Results demonstrated that reliance on spectral properties increased over baseline for all three groups. However, inhibitory training was more effective than enhancement training, and both inhibitory and enhancement training were more effective than natural distribution training, in decreasing listeners' attention to duration. These results suggest that phonetic learning may involve two distinct cognitive processes, cue enhancement and cue inhibition, that function to shift selective attention between separable acoustic dimensions. Moreover, cue-specific training (whether enhancing or inhibitory) appears to be more effective for the acquisition of second-language speech contrasts.
Affiliation(s)
- Alexander L. Francis
- Program in Linguistics, Purdue University, West Lafayette, IN 47906, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47906, USA
82
Abstract
The last decade has produced an explosion in neuroscience research examining young children's early processing of language. Noninvasive, safe functional brain measurements have now been proven feasible for use with children starting at birth. The phonetic level of language is especially accessible to experimental studies that document the innate state and the effect of learning on the brain. The neural signatures of learning at the phonetic level can be documented at a remarkably early point in development. Continuity in linguistic development from infants' earliest brain responses to phonetic stimuli is reflected in their language and prereading abilities in the second, third, and fifth year of life, a finding with theoretical and clinical impact. There is evidence that early mastery of the phonetic units of language requires learning in a social context. Neuroscience on early language learning is beginning to reveal the multiple brain systems that underlie the human language faculty.
Affiliation(s)
- Patricia K Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA.
83
Watson LR, Baranek GT, Roberts JE, David FJ, Perryman TY. Behavioral and physiological responses to child-directed speech as predictors of communication outcomes in children with autism spectrum disorders. Journal of Speech, Language, and Hearing Research 2010; 53:1052-64. [PMID: 20631229] [PMCID: PMC3192008] [DOI: 10.1044/1092-4388(2009/09-0096)]
Abstract
PURPOSE To determine the extent to which behavioral and physiological responses during child-directed speech (CDS) correlate concurrently and predictively with communication skills in young children with autism spectrum disorders (ASD). METHOD Twenty-two boys with ASD (initial mean age: 35 months) participated in a longitudinal study. At entry, behavioral (i.e., percentage looking) and physiological (i.e., vagal activity) measures were collected during the presentation of CDS stimuli. A battery of standardized communication measures was administered at entry and readministered 12 months later. RESULTS Percentage looking during CDS was strongly correlated with all entry and follow-up communication scores; vagal activity during CDS was moderately to strongly correlated with entry receptive language, follow-up expressive language, and social-communicative adaptive skills. After controlling for entry communication skills, vagal activity during CDS accounted for significant variance in follow-up communication skills, but percentage looking during CDS did not. CONCLUSIONS Behavioral and physiological responses to CDS are significantly related to concurrent and later communication skills of children with ASD. Furthermore, higher vagal activity during CDS predicts better communication outcomes 12 months later, after initial communication skills are accounted for. Further research is needed to better understand the physiological mechanisms underlying variable responses to CDS among children with ASD.
Affiliation(s)
- Linda R Watson
- Division of Speech and Hearing Sciences, CB 7190, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7190, USA.
84
Song JY, Demuth K, Morgan J. Effects of the acoustic properties of infant-directed speech on infant word recognition. The Journal of the Acoustical Society of America 2010; 128:389-400. [PMID: 20649233] [PMCID: PMC2921436] [DOI: 10.1121/1.3419786]
Abstract
A number of studies have examined the acoustic differences between infant-directed speech (IDS) and adult-directed speech, suggesting that the exaggerated acoustic properties of IDS might facilitate infants' language development. However, there has been little empirical investigation of the acoustic properties that infants use for word learning. The goal of this study was thus to examine how 19-month-olds' word recognition is affected by three acoustic properties of IDS: slow speaking rate, vowel hyper-articulation, and wide pitch range. Using the intermodal preferential looking procedure, infants were exposed to half of the test stimuli (e.g., Where's the book?) in typical IDS style. The other half of the stimuli were digitally altered to remove one of the three properties under investigation. After the target word (e.g., book) was spoken, infants' gaze toward target and distractor referents was measured frame by frame to examine the time course of word recognition. The results showed that slow speaking rate and vowel hyper-articulation significantly improved infants' ability to recognize words, whereas wide pitch range did not. These findings suggest that 19-month-olds' word recognition may be affected only by the linguistically relevant acoustic properties in IDS.
Affiliation(s)
- Jae Yung Song
- Department of Cognitive and Linguistic Sciences, Brown University, Providence, Rhode Island 02912, USA.
85
Cristià A. Phonetic enhancement of sibilants in infant-directed speech. The Journal of the Acoustical Society of America 2010; 128:424-34. [PMID: 20649236] [PMCID: PMC3188599] [DOI: 10.1121/1.3436529]
Abstract
The hypothesis that vocalic categories are enhanced in infant-directed speech (IDS) has received a great deal of attention and support. In contrast, work focusing on the acoustic implementation of consonantal categories has been scarce, and positive, negative, and null results have been reported. However, interpreting this mixed evidence is complicated by the facts that the definition of phonetic enhancement varies across articles, that small and heterogeneous groups have been studied across experiments, and further that the categories chosen are likely affected by other characteristics of IDS. Here, an analysis of the English sibilants /s/ and /ʃ/ in a large corpus of caregivers' speech to another adult and to their infant suggests that consonantal categories are indeed enhanced, even after controlling for typical IDS prosodic characteristics.
Affiliation(s)
- Alejandrina Cristià
- Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS-DEC-ENS-CNRS, Paris 75005, France.
86
Fais L, Kajikawa S, Amano S, Werker JF. Now you hear it, now you don't: vowel devoicing in Japanese infant-directed speech. Journal of Child Language 2010; 37:319-340. [PMID: 19490747] [DOI: 10.1017/s0305000909009556]
Abstract
In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.
Affiliation(s)
- Laurel Fais
- Department of Psychology, University of British Columbia, Vancouver, Canada.
87
Schirmer A. Mark my words: tone of voice changes affective word representations in memory. PLoS One 2010; 5:e9080. [PMID: 20169154] [PMCID: PMC2821399] [DOI: 10.1371/journal.pone.0009080]
Abstract
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents.
Affiliation(s)
- Annett Schirmer
- Department of Psychology, National University of Singapore, Singapore, Singapore.
88
Abstract
Infant phonetic perception reorganizes in accordance with the native language by 10 months of age. One mechanism that may underlie this perceptual change is distributional learning, a statistical analysis of the distributional frequency of speech sounds. Previous distributional learning studies have tested infants of 6-8 months, an age at which native phonetic categories have not yet developed. Here, three experiments test infants of 10 months to help illuminate perceptual ability following perceptual reorganization. English-learning infants did not change discrimination in response to nonnative speech sound distributions from either a voicing distinction (Experiment 1) or a place-of-articulation distinction (Experiment 2). In Experiment 3, familiarization to the place-of-articulation distinction was doubled to increase the amount of exposure, and in this case infants began discriminating the sounds. These results extend the processes of distributional learning to a new phonetic contrast, and reveal that at 10 months of age, distributional phonetic learning remains effective, but is more difficult than before perceptual reorganization.
Affiliation(s)
- Ferran Pons
- Departament de Psicologia Bàsica Universitat de Barcelona
- Jessica Maye
- Department of Communication Sciences and Disorders Northwestern University
- Janet F Werker
- Department of Psychology The University of British Columbia
89
Abstract
Infants learn the forms of words by listening to the speech they hear. Though little is known about the degree to which these forms are meaningful for young infants, the words still play a role in early language development. Words guide the infant to his or her first syntactic intuitions, aid in the development of the lexicon, and, it is proposed, may help infants learn phonetic categories.
Affiliation(s)
- Daniel Swingley
- Department of Psychology, University of Pennsylvania, 3401 Walnut Street 302C, Philadelphia, PA 19104, USA.
90
Yeung HH, Werker JF. Learning words’ sounds before learning how words sound: 9-month-olds use distinct objects as cues to categorize speech information. Cognition 2009; 113:234-43. [PMID: 19765698] [DOI: 10.1016/j.cognition.2009.08.010]
Affiliation(s)
- H Henny Yeung
- Department of Psychology, The University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4.
91
Singh L, Nestor S, Parikh C, Yull A. Influences of infant-directed speech on early word recognition. Infancy 2009; 14:654-666. [PMID: 32693515] [DOI: 10.1080/15250000903263973]
Abstract
When addressing infants, many adults adopt a particular type of speech, known as infant-directed speech (IDS). IDS is characterized by exaggerated intonation, as well as reduced speech rate, shorter utterance duration, and grammatical simplification. It is commonly asserted that IDS serves in part to facilitate language learning. Although intuitively appealing, direct empirical tests of this claim are surprisingly scarce. Additionally, studies that have examined associations between IDS and language learning have measured learning within a single laboratory session rather than the type of long-term storage of information necessary for word learning. In this study, 7- and 8-month-old infants' long-term memory for words was assessed when words were spoken in IDS and adult-directed speech (ADS). Word recognition over the long term was successful for words introduced in IDS, but not for those introduced in ADS, regardless of the register in which recognition stimuli were produced. Findings are discussed in the context of the influence of particular input styles on emergent word knowledge in prelexical infants.
Affiliation(s)
- Leher Singh
- Department of Speech, Language, and Hearing Sciences Boston University
92
Zhang Y, Kuhl PK, Imada T, Iverson P, Pruitt J, Stevens EB, Kawakatsu M, Tohkura Y, Nemoto I. Neural signatures of phonetic learning in adulthood: a magnetoencephalography study. Neuroimage 2009; 46:226-40. [PMID: 19457395] [PMCID: PMC2811417] [DOI: 10.1016/j.neuroimage.2009.01.028]
Abstract
The present study used magnetoencephalography (MEG) to examine perceptual learning of American English /r/ and /l/ categories by Japanese adults who had limited English exposure. A training software program was developed based on the principles of infant phonetic learning, featuring systematic acoustic exaggeration, multi-talker variability, visible articulation, and adaptive listening. The program was designed to help Japanese listeners utilize an acoustic dimension relevant for phonemic categorization of /r-l/ in English. Although training did not produce a native-like phonetic boundary along the /r-l/ synthetic continuum in the second-language learners, success was seen in a highly significant identification improvement over twelve training sessions and in transfer of learning to novel stimuli. Consistent with the behavioral results, pre-post MEG measures showed not only enhanced neural sensitivity to the /r-l/ distinction in the left-hemisphere mismatch field (MMF) response but also bilateral decreases in equivalent current dipole (ECD) cluster and duration measures for stimulus coding in the inferior parietal region. The learning-induced increases in neural sensitivity and efficiency were also found in distributed source analysis using Minimum Current Estimates (MCE). Furthermore, the pre-post changes exhibited significant brain-behavior correlations between speech discrimination scores and MMF amplitudes, as well as between the behavioral scores and ECD measures of neural efficiency. Together, the data provide corroborating evidence that substantial neural plasticity for second-language learning in adulthood can be induced with adaptive and enriched linguistic exposure. Like the MMF, the ECD cluster and duration measures are sensitive neural markers of phonetic learning.
Affiliation(s)
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA.
93
Abstract
Can infants, in the very first stages of word learning, use their perceptual sensitivity to the phonetics of speech while learning words? Research to date suggests that infants of 14 months cannot learn two similar-sounding words unless there is substantial contextual support. The current experiment advances our understanding of this failure by testing whether the source of infants' difficulty lies in the learning or testing phase. Infants were taught to associate two similar-sounding words with two different objects, and tested using a visual choice method rather than the standard Switch task. The results reveal that 14-month-olds are capable of learning and mapping two similar-sounding labels; they can apply phonetic detail in new words. The findings are discussed in relation to infants' concurrent failure, and the developmental transition to success, in the Switch task.
94
Abstract
Previous tests of toddlers' phonological knowledge of familiar words using word recognition tasks have examined syllable onsets but not word-final consonants (codas). However, there are good reasons to suppose that children's knowledge of coda consonants might be less complete than their knowledge of onset consonants. To test this hypothesis, the present study examined 14- to 21-month-old children's knowledge of the phonological forms of familiar words by measuring their comprehension of correctly-pronounced and mispronounced instances of those words using a visual fixation task. Mispronunciations substituted onset or coda consonants. Adults were tested in the same task for comparison with children. Children and adults fixated named targets more upon hearing correct pronunciations than upon hearing mispronunciations, whether those mispronunciations involved the word's initial or final consonant. In addition, detailed analysis of the timing of adults' and children's eye movements provided clear evidence for incremental interpretation of the speech signal. Children's responses were slower and less accurate overall, but children and adults showed nearly identical temporal effects of the placement of phonological substitutions. The results demonstrate accurate encoding of consonants even in words children cannot yet say.
Affiliation(s)
- Daniel Swingley
- Department of Psychology, University of Pennsylvania, 3401 Walnut St. 302C, Philadelphia, PA 19104
95
Teinonen T, Aslin RN, Alku P, Csibra G. Visual speech contributes to phonetic learning in 6-month-old infants. Cognition 2008; 108:850-5. [DOI: 10.1016/j.cognition.2008.05.009]
96
97
White KS, Peperkamp S, Kirk C, Morgan JL. Rapid acquisition of phonological alternations by infants. Cognition 2008; 107:238-65. [PMID: 18191826] [PMCID: PMC2941201] [DOI: 10.1016/j.cognition.2007.11.012]
Abstract
We explore whether infants can learn novel phonological alternations on the basis of distributional information. In Experiment 1, two groups of 12-month-old infants were familiarized with artificial languages whose distributional properties exhibited either stop or fricative voicing alternations. At test, infants in the two exposure groups had different preferences for novel sequences involving voiced and voiceless stops and fricatives, suggesting that each group had internalized a different familiarization alternation. In Experiment 2, 8.5-month-olds exhibited the same patterns of preference. In Experiments 3 and 4, we investigated whether infants' preferences were driven solely by preferences for sequences of high transitional probability. Although 8.5-month-olds in Experiment 3 were sensitive to the relative probabilities of sequences in the familiarization stimuli, only 12-month-olds in Experiment 4 showed evidence of having grouped alternating segments into a single functional category. Taken together, these results suggest a developmental trajectory for the acquisition of phonological alternations using distributional cues in the input.
98
Bilingualism in infancy: first steps in perception and comprehension. Trends Cogn Sci 2008; 12:144-51. [PMID: 18343711] [DOI: 10.1016/j.tics.2008.01.008]
Abstract
Many children grow up in bilingual families and acquire two first languages. Emerging research is advancing the view that the capacity to acquire language can be applied as readily to two languages as to one, but that bilingual and monolingual acquisition nonetheless differ in some nontrivial ways. To probe the first steps toward acquisition, researchers recently have begun to use experimental methods to study preverbal bilingual infants. We review the literature in this growing field, focusing on how infants growing up bilingual use surface acoustic information to separate, categorize, and begin to learn their two languages. These new data invite the expansion of standard linguistic theories to account for how a single architecture can support the acquisition of two languages simultaneously.
99
Kuhl PK, Conboy BT, Coffey-Corina S, Padden D, Rivera-Gaxiola M, Nelson T. Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e). Philos Trans R Soc Lond B Biol Sci 2008; 363:979-1000. [PMID: 17846016] [PMCID: PMC2606791] [DOI: 10.1098/rstb.2007.2154]
Abstract
Infants' speech perception skills show a dual change towards the end of the first year of life. Not only does non-native speech perception decline, as often shown, but native language speech perception skills show improvement, reflecting a facilitative effect of experience with native language. The mechanism underlying change at this point in development, and the relationship between the change in native and non-native speech perception, is of theoretical interest. As shown in new data presented here, at the cusp of this developmental change, infants' native and non-native phonetic perception skills predict later language ability, but in opposite directions. Better native language skill at 7.5 months of age predicts faster language advancement, whereas better non-native language skill predicts slower advancement. We suggest that native language phonetic performance is indicative of neural commitment to the native language, while non-native phonetic performance reveals uncommitted neural circuitry. This paper has three goals: (i) to review existing models of phonetic perception development, (ii) to present new event-related potential data showing that native and non-native phonetic perception at 7.5 months of age predicts language growth over the next 2 years, and (iii) to describe a revised version of our previous model, the native language magnet model, expanded (NLM-e). NLM-e incorporates five new principles. Specific testable predictions for future research programmes are described.
Affiliation(s)
- Patricia K Kuhl
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA 98195, USA.
100
Beyond babytalk: Re-evaluating the nature and content of speech input to preverbal infants. Developmental Review 2007. [DOI: 10.1016/j.dr.2007.06.002]