1
Creel SC, Frye CI. Minimal gains for minimal pairs: Difficulty in learning similar-sounding words continues into preschool. J Exp Child Psychol 2024; 240:105831. [PMID: 38134601] [DOI: 10.1016/j.jecp.2023.105831]
Abstract
A critical indicator of spoken language knowledge is the ability to discern the finest possible distinctions that exist between words in a language: minimal pairs, for example, the distinction between the novel words beesh and peesh. Infants differentiate similar-sounding novel labels like "bih" and "dih" by 17 months of age or earlier in the context of word learning. Adult word learners readily distinguish similar-sounding words. What is unclear is the shape of learning between infancy and adulthood: Is there a nonlinear increase early in development, or is there protracted improvement as experience with spoken language amasses? Three experiments tested monolingual English-speaking children aged 3 to 6 years and young adults. Children underperformed when learning minimal-pair words compared with adults (Experiment 1), compared with learning dissimilar words even when speech materials were optimized for young children (Experiment 2), and even when the number of word instances during learning was quadrupled (Experiment 3). Nonetheless, the youngest group readily recognized familiar minimal pairs (Experiment 3). Results are consistent with a lengthy trajectory for detailed sound-pattern learning in one's native language(s), although other interpretations are possible. Suggestions for research on developmental trajectories across various age ranges are made.
Affiliation(s)
- Sarah C Creel
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA.
- Conor I Frye
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
2
Lador-Weizman Y, Deutsch A. The influence of language-specific properties on the role of consonants and vowels in a statistical learning task of an artificial language: A cross-linguistic comparison. Q J Exp Psychol (Hove) 2024. [PMID: 38262925] [DOI: 10.1177/17470218241229721]
Abstract
The contribution of consonants and vowels to spoken word processing has been widely investigated, and studies have documented a consonant bias (C-bias): consonants carry more weight than vowels. However, across languages, various patterns have been observed, including no preference and a reverse pattern of vowel bias. A central question is how the manifestation of the C-bias is modulated by language-specific factors, a question best addressed by cross-linguistic studies. Comparing native Hebrew and native English speakers, this study examines the relative importance of transitional probabilities between non-adjacent consonants, as opposed to vowels, during auditory statistical learning (SL) of an artificial language. Hebrew is of particular interest because its complex Semitic morphological structure has been found to play a central role in lexical access, allowing us to examine whether morphological properties can modulate the C-bias in early phases of speech perception, namely word segmentation. As predicted, we found a significant interaction between language and consonant/vowel manipulation: Hebrew speakers performed better in the consonantal condition than in the vowel condition (i.e., a C-bias), whereas English speakers showed no consonant/vowel asymmetry. We suggest that the observed interaction is morphologically anchored, indicating that phonological and morphological processes interact during early phases of auditory word perception.
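As a toy illustration of the segmentation statistic at stake (not the authors' materials), transitional probabilities can be computed separately over the consonant tier and the vowel tier of a syllable stream; deleting the other tier turns non-adjacent dependencies into adjacent ones. The function name and the miniature stream below are hypothetical:

```python
from collections import Counter

VOWELS = set("aeiou")

def tier_transitional_probs(syllables, keep_vowels=False):
    """Forward transitional probabilities over one segment tier.
    TP(x -> y) = count(x followed by y) / count(x), computed after
    deleting the other tier, so non-adjacent dependencies become adjacent."""
    tier = [seg for syl in syllables for seg in syl
            if (seg in VOWELS) == keep_vowels]
    pair_counts = Counter(zip(tier, tier[1:]))
    first_counts = Counter(tier[:-1])
    return {pair: n / first_counts[pair[0]]
            for pair, n in pair_counts.items()}

# Toy stream in which the consonant 'p' is always followed,
# non-adjacently across an intervening vowel, by 'k'
stream = ["pa", "ki", "pu", "ko", "pa", "ki"]
tps = tier_transitional_probs(stream)   # consonant tier: p k p k p k
```

A learner sensitive to the consonant tier would find `TP(p -> k) = 1.0` here, while the vowel tier of the same stream carries its own, independent statistics.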
Affiliation(s)
- Yaara Lador-Weizman
- Seymour Fox School of Education, The Hebrew University of Jerusalem, Jerusalem, Israel
- Avital Deutsch
- Seymour Fox School of Education, The Hebrew University of Jerusalem, Jerusalem, Israel
3
Hegde M, Nazzi T, Cabrera L. An auditory perspective on phonological development in infancy. Front Psychol 2024; 14:1321311. [PMID: 38327506] [PMCID: PMC10848800] [DOI: 10.3389/fpsyg.2023.1321311]
Abstract
Introduction: The auditory system encodes the phonetic features of languages by processing spectro-temporal modulations in speech, which can be described at two time scales: relatively slow amplitude variations over time (AM, further divided into the slowest components, <8-16 Hz, and faster components, 16-500 Hz), and frequency modulations (FM, oscillating at higher rates, from about 600 Hz to 10 kHz). While adults require only the slowest AM cues to identify and discriminate speech sounds, infants have been shown to also require faster AM cues (>8-16 Hz) for similar tasks. Methods: Using an observer-based psychophysical method, this study measured the ability of typical-hearing 6-month-olds, 10-month-olds, and adults to detect a change in the vowel or consonant features of consonant-vowel syllables when temporal modulations are selectively degraded. Two acoustically degraded conditions were designed by replacing FM cues with pure tones in 32 frequency bands and then extracting AM cues in each frequency band with two different low-pass cut-off frequencies: (1) half the bandwidth (Fast AM condition), (2) <8 Hz (Slow AM condition). Results: In the Fast AM condition, with reduced FM cues, 85% of 6-month-olds, 72.5% of 10-month-olds, and 100% of adults successfully categorized phonemes. Among participants who passed the Fast AM condition, 67% of 6-month-olds, 75% of 10-month-olds, and 95% of adults passed the Slow AM condition. Furthermore, across the three age groups, the proportion of participants able to detect a phonetic category change did not differ between the vowel and consonant conditions. However, age-related differences were observed for vowel categorization: while the 6- and 10-month-old groups did not differ from one another, they both differed from adults.
Moreover, for consonant categorization, 10-month-olds were more affected by acoustic temporal degradation than 6-month-olds, showing a greater decline in detection success rates between the Fast AM and Slow AM conditions. Discussion: The degradation of FM and faster AM cues (>8 Hz) appears to strongly affect consonant processing at 10 months of age. These findings suggest that between 6 and 10 months, infants follow different developmental trajectories in the perceptual weighting of temporal acoustic cues for vowel versus consonant processing, possibly linked to phonological attunement.
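The "Slow AM" versus "Fast AM" manipulation comes down to how aggressively the within-band amplitude envelope is low-pass filtered. A minimal sketch of envelope extraction for a single band, assuming a crude half-wave rectifier and one-pole filter (real vocoder-style studies use 32 bands and much steeper filters; all names here are illustrative):

```python
import math

def am_envelope(signal, fs, cutoff_hz):
    """Half-wave rectify, then one-pole low-pass at cutoff_hz.
    A rough stand-in for within-band envelope (AM) extraction."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)
    env, y = [], 0.0
    for x in signal:
        y += alpha * (max(x, 0.0) - y)   # smooth the rectified sample
        env.append(y)
    return env

fs = 16000
t = [i / fs for i in range(fs // 10)]                     # 100 ms
carrier = [math.sin(2 * math.pi * 1000 * s) for s in t]   # 1 kHz tone
slow = am_envelope(carrier, fs, 8.0)     # "Slow AM": only <8 Hz envelope survives
fast = am_envelope(carrier, fs, 500.0)   # faster AM cues retained
```

With an 8 Hz cutoff the envelope flattens toward the mean level of the rectified signal, whereas the 500 Hz cutoff preserves much faster amplitude fluctuations, which is exactly the information the two experimental conditions give or withhold.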
Affiliation(s)
- Monica Hegde
- Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Cité-CNRS, Paris, France
4
Gannon C, Hill RA, Lameira AR. Open plains are not a level playing field for hominid consonant-like versus vowel-like calls. Sci Rep 2023; 13:21138. [PMID: 38129443] [PMCID: PMC10739746] [DOI: 10.1038/s41598-023-48165-7]
Abstract
Africa's paleo-climate change represents an "ecological black box" along the evolutionary timeline of spoken language; a vocal hominid went in and, millions of years later, out came a verbal human. It is unknown whether or how a shift from forested, dense habitats towards drier, open ones affected hominid vocal communication, potentially setting the stage for speech evolution. To recreate how arboreal proto-vowels and proto-consonants would have interacted with a new ecology at ground level, we assessed how a series of orangutan voiceless consonant-like and voiced vowel-like calls travelled across the savannah. Vowel-like calls performed poorly in comparison to their consonant-like counterparts. Only consonant-like calls afforded effective perceptibility beyond 100 m without requiring repetition, as is characteristic of loud calling behaviour in nonhuman primates, which is typically composed of vowel-like calls. Results show that proto-consonants in human ancestors may have enhanced the reliability of long-distance vocal communication across a canopy-to-ground ecotone. The ecological settings and soundscapes experienced by human ancestors may have had a more profound impact on the emergence and shape of spoken language than previously recognized.
Affiliation(s)
- Russell A Hill
- Department of Anthropology, Durham University, Durham, UK
- Primate and Predator Project, Soutpansberg Mountains, Thohoyandou, South Africa
- Department of Biological Sciences, University of Venda, Thohoyandou, South Africa
5
Højen A, Madsen TO, Bleses D. Danish 20-month-olds' recognition of familiar words with and without consonant and vowel mispronunciations. Phonetica 2023; 80:309-328. [PMID: 37533184] [DOI: 10.1515/phon-2023-2001]
Abstract
Although several studies initially supported the proposal by Nespor et al. (Nespor, Marina, Marcela Peña & Jacques Mehler. 2003. On the different roles of vowels and consonants in speech processing and language acquisition. Lingue e Linguaggio 2. 221-247) that consonants are more informative than vowels in lexical processing, a more complex picture has emerged from recent research. Current evidence suggests that infants initially show a vowel bias in lexical processing and later transition to a consonant bias, possibly depending on the characteristics of the ambient language. Danish infants have shown a vowel bias in word learning at 20 months, an age at which infants learning French or Italian no longer show a vowel bias but rather a consonant bias, and infants learning English show no bias. The present study tested whether Danish 20-month-olds also show a vowel bias when recognizing familiar words. Specifically, using the Intermodal Preferential Looking paradigm, we tested whether Danish infants were more likely to ignore (i.e., accept) consonant than vowel mispronunciations when matching familiar words with pictures. The infants successfully matched correctly pronounced familiar words with pictures but showed no vowel or consonant bias when matching mispronounced words with pictures. The lack of a bias for Danish vowels or consonants in familiar word recognition adds to evidence that lexical processing biases are language-specific and may additionally depend on developmental age and perhaps task difficulty.
Affiliation(s)
- Anders Højen
- School of Communication and Culture and TrygFonden's Centre for Child Research, Aarhus University, Aarhus V, Denmark
- Thomas O Madsen
- Department of Language, Culture, History and Communication, University of Southern Denmark, Odense, Denmark
- Dorthe Bleses
- School of Communication and Culture and TrygFonden's Centre for Child Research, Aarhus University, Aarhus V, Denmark
6
Cross-situational word learning of Cantonese Chinese. Psychon Bull Rev 2022. [DOI: 10.3758/s13423-022-02217-7]
7
Bouchon C, Hochmann JR, Toro JM. Spanish-learning infants switch from a vowel to a consonant bias during the first year of life. J Exp Child Psychol 2022; 221:105444. [DOI: 10.1016/j.jecp.2022.105444]
8
Creel SC. Preschoolers Have Difficulty Discriminating Novel Minimal-Pair Words. J Speech Lang Hear Res 2022; 65:2540-2553. [PMID: 35777741] [DOI: 10.1044/2022_jslhr-22-00029]
Abstract
PURPOSE The primary aim was to assess whether children have difficulty distinguishing similar-sounding novel words. The secondary aim was to assess which task characteristics might hinder or facilitate perceptual discrimination. METHOD Three within-subjects experiments tested a total of ninety-nine 3- to 5-year-old children. Experiment 1 presented two cartoon characters, each saying a novel word, and children were asked to report whether the characters said the same word or different words. Words were identical (e.g., deev/deev), dissimilar (deev/vush), differed in onset consonant voicing (deev/teev), or differed in vowel tenseness (deev/div). Experiment 2 added accuracy feedback after each trial to remind children of task instructions. Experiment 3 interspersed many "same" trials containing a repeating standard word to assess the role of bottom-up stimulus support on difference detection. RESULTS The d' scores were highest for dissimilar words, next highest for different-vowel pairs, and lowest for different-consonant pairs. Performance was better with repeated standard stimuli (Experiment 3) than without (Experiment 1). Benefits of repeated task instructions (Experiment 2) were marginal. Exploratory analyses comparing these results to findings in a word-learning study using the same stimuli suggest an imperfect match to how easily children can learn similar-sounding words. CONCLUSIONS Overall, similar-sounding novel words are challenging for children to discriminate perceptually, although discrimination scores exceeded chance for all levels of similarity. Clinically speaking, same/different tests may be less sensitive to sound discrimination than change/no-change tests. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.20151848.
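For readers unfamiliar with the d' scores reported here: sensitivity in a same/different task is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with a standard log-linear correction; the trial counts below are made up for illustration, not the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).
    The +0.5/+1 log-linear correction keeps rates off 0 and 1,
    where the inverse normal CDF would be infinite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical child: 18/20 "different" trials detected (hits),
# 2/20 "same" trials wrongly called "different" (false alarms)
sensitivity = d_prime(hits=18, misses=2, false_alarms=2, correct_rejections=18)
```

A child answering at chance (equal hit and false-alarm rates) gets d' = 0, which is why above-chance d' for every similarity level is the meaningful benchmark in the abstract.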
Affiliation(s)
- Sarah C Creel
- Department of Cognitive Science, SDSU-UCSD Joint Doctoral Program in Language and Communicative Disorders, University of California San Diego, La Jolla
9
Weyers I, Mueller J. A Special Role of Syllables, But Not Vowels or Consonants, for Nonadjacent Dependency Learning. J Cogn Neurosci 2022; 34:1467-1487. [PMID: 35604359] [DOI: 10.1162/jocn_a_01874]
Abstract
Successful language processing entails tracking (morpho)syntactic relationships between distant units of speech, so-called nonadjacent dependencies (NADs). Many cues to such dependency relations have been identified, yet the linguistic elements encoding them have received little attention. In the present investigation, we tested whether and how these elements, here syllables, consonants, and vowels, affect behavioral learning success as well as learning-related changes in neural activity in relation to item-specific NAD learning. In a set of two EEG studies with adults, we compared learning under conditions where either all segment types (Experiment 1) or only one segment type (Experiment 2) was informative. The collected behavioral and ERP data indicate that, when all three segment types are available, participants mainly rely on the syllable for NAD learning. With only one segment type available for learning, adults also perform most successfully with syllable-based dependencies. Although we find no evidence for successful learning across vowels in Experiment 2, dependencies between consonants seem to be identified at least passively at the phonetic-feature level. Together, these results suggest that successful item-specific NAD learning may depend on the availability of syllabic information. Furthermore, they highlight consonants' distinctive power to support lexical processes. Although syllables show a clear facilitatory function for NAD learning, the underlying mechanisms of this advantage require further research.
10
Frye CI, Creel SC. Perceptual flexibility in word learning: Preschoolers learn words with speech sound variability. Brain Lang 2022; 226:105078. [PMID: 35074621] [DOI: 10.1016/j.bandl.2022.105078]
Abstract
Children's language input is rife with acoustic variability. Much of this variability may facilitate learning by highlighting unvarying, criterial speech attributes. But in many cases, learners experience variation in those criterial attributes themselves, as when hearing speakers with different accents. How flexible are children in the face of this variability? The current study taught 3-5-year-olds new words containing speech-sound variability: a single picture might be labeled both deev and teev. After learning, children's knowledge was tested by presenting two pictures and asking them to point to one. Picture-pointing accuracy and eye movements were tracked. While children pointed less accurately and looked less rapidly to dual-label than single-label words, they robustly exceeded chance. Performance was weaker when children learned two distinct labels, such as vayfe and fosh, for a single object. Findings suggest moderate learning even with speech-sound variability. One implication is that neural representations of speech contain rich gradient information.
Affiliation(s)
- Conor I Frye
- Department of Cognitive Science, UC San Diego, La Jolla, CA, USA
- Sarah C Creel
- Department of Cognitive Science, UC San Diego, La Jolla, CA, USA.
11
White KS, Daub O. When it's not appropriate to adapt: Toddlers' learning of novel speech patterns is affected by visual information. Brain Lang 2021; 222:105022. [PMID: 34536771] [DOI: 10.1016/j.bandl.2021.105022]
Abstract
In adults, perceptual learning for speech is constrained, such that learning of novel pronunciations is less likely to occur if the (e.g., visual) context indicates that they are transient. However, adults have had a lifetime of experience with the types of cues that signal stable vs. transient speech variation. We ask whether visual context affects toddlers' learning of a novel speech pattern. Across conditions, 19-month-olds (N = 117) were exposed to familiar words either pronounced typically or in a novel, consonant-shifting accent. During exposure, some toddlers heard the accented pronunciations without a face present; others saw a video of the speaker producing the words with a lollipop against her cheek or in her mouth. Toddlers showed the weakest learning of the accent when the speaker had the lollipop in her mouth, suggesting that they treated the lollipop as the cause of the atypical pronunciations. These results demonstrate that toddlers' adaptation to a novel speech pattern is influenced by extra-linguistic context.
12
Trecca F, Bleses D, Højen A, Madsen TO, Christiansen MH. When Too Many Vowels Impede Language Processing: An Eye-Tracking Study of Danish-Learning Children. Lang Speech 2020; 63:898-918. [PMID: 31898932] [DOI: 10.1177/0023830919893390]
Abstract
Research has suggested that Danish-learning children lag behind in early language acquisition. The phenomenon has been attributed to the opaque phonetic structure of Danish, which features an unusually large number of non-consonantal sounds (i.e., vowels and semivowels/glides). The large number of vocalic sounds in speech is thought to provide fewer cues to word segmentation and to make language processing harder, thus hindering the acquisition process. In this study, we explored whether the presence of vocalic sounds at word boundaries impedes real-time speech processing in 24-month-old Danish-learning children, compared to word boundaries that are marked by consonantal sounds. Using eye-tracking, we tested children's real-time comprehension of known consonant-initial and vowel-initial words when presented in either a consonant-final carrier phrase or in a vowel-final carrier phrase, thus resulting in the four boundary types C#C, C#V, V#C, and V#V. Our results showed that the presence of vocalic sounds around a word boundary, especially before it, impedes processing of Danish child-directed sentences.
Affiliation(s)
- Fabio Trecca
- School of Communication and Culture, Aarhus University, Denmark
- Anders Højen
- TrygFonden's Centre for Child Research, Aarhus University, Denmark
- Thomas O Madsen
- Department of Language and Communication, University of Southern Denmark, Denmark
- Morten H Christiansen
- Department of Psychology, Cornell University, NY; Interacting Minds Centre & School of Communication and Culture, Aarhus University, Denmark
13
The role of linguistic experience in the development of the consonant bias. Anim Cogn 2020; 24:419-431. [PMID: 33052544] [DOI: 10.1007/s10071-020-01436-6]
Abstract
Consonants and vowels play different roles in speech perception: listeners rely more heavily on consonant information than on vowel information when distinguishing between words. This reliance on consonants for word identification is the consonant bias (Nespor et al., Ling 2:203-230, 2003). Several factors modulate infants' development of the consonant bias, including fine-grained temporal processing ability and native language exposure [for review, see Nazzi et al. (Curr Direct Psychol Sci 25:291-296, 2016)]. A rat model demonstrated that mature fine-grained temporal processing alone cannot account for consonant bias emergence; linguistic exposure is also necessary (Bouchon & Toro, An Cog 22:839-850, 2019). This study tested domestic dogs, who have similarly fine-grained temporal processing but more language exposure than rats, to assess whether a minimal lexicon and a small degree of regular linguistic exposure can allow for consonant bias development. Dogs demonstrated a vowel bias rather than a consonant bias, preferring their own name over a vowel-mispronounced version of their name, but not over a consonant-mispronounced version. This is the pattern seen in young infants (Bouchon et al., Dev Sci 18:587-598, 2015) and rats (Bouchon & Toro, An Cog 22:839-850, 2019). In a follow-up study, dogs treated a consonant-mispronounced version of their name similarly to their actual name, further suggesting that dogs do not treat consonant differences as meaningful for word identity. These results support the findings of Bouchon and Toro (An Cog 22:839-850, 2019), suggesting that there may be a default preference for vowel information over consonant information when identifying word forms, and that the consonant bias may be a human-exclusive tool for language learning.
14
Von Holzen K, Nazzi T. Emergence of a consonant bias during the first year of life: New evidence from own-name recognition. Infancy 2020; 25:319-346. [PMID: 32749054] [DOI: 10.1111/infa.12331]
Abstract
Recent evidence suggests that during the first year of life, a preference for consonant information during lexical processing (consonant bias) emerges, at least for some languages such as French. Our study investigated the factors involved in this emergence, as well as the developmental consequences of variation in consonant bias emergence. In a series of experiments, we measured 5-, 8-, and 11-month-old French-learning infants' orientation times to a consonant or vowel mispronunciation of their own name, one of the few word forms familiar to infants at this young age. Both 5- and 8-month-olds oriented longer to vowel mispronunciations, but 11-month-olds showed a different pattern, initially orienting longer to consonant mispronunciations. We interpret these results as further evidence of an initial vowel bias, with consonant bias emergence by 11 months. Neither acoustic-phonetic nor lexical factors predicted preferences in 8- and 11-month-olds. Finally, counter to our predictions, a vowel bias at the time of test for 11-month-olds was related to later productive vocabulary outcomes.
Affiliation(s)
- Katie Von Holzen
- Lehrstuhl Linguistik des Deutschen, Schwerpunkt Deutsch als Fremdsprache/Deutsch als Zweitsprache, Technische Universität Dortmund, Dortmund, Germany; Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS (Integrative Neuroscience and Cognition Center, UMR 8002), Paris, France; Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Thierry Nazzi
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS (Integrative Neuroscience and Cognition Center, UMR 8002), Paris, France
15
Is the consonant bias specifically human? Long-Evans rats encode vowels better than consonants in words. Anim Cogn 2019; 22:839-850. [PMID: 31222546] [DOI: 10.1007/s10071-019-01280-3]
Abstract
In natural languages, vowels tend to convey structure (syntax, prosody), while consonants are more important lexically. The consonant bias, the tendency to rely more on consonants than on vowels to process words, is well attested in human adults and in infants after the first year of life. Is the consonant bias based on evolutionarily ancient mechanisms, potentially present in other species? The current study investigated this issue in a species phylogenetically distant from humans: Long-Evans rats. During training, the animals were presented with four natural word-forms (e.g., mano, "hand"). We then compared their responses to novel words carrying either a consonant change (pano) or a vowel change (meno). Results show that the animals were less disrupted by consonantal alterations than by vocalic alterations of words; that is, word recognition was more affected by the alteration of a vowel than of a consonant. Together with previous findings in very young human infants, the reliance on vocalic information we observed in rats suggests that the emergence of the consonant bias may require a combination of vocal, cognitive and auditory skills that rodents do not seem to possess.
16
Von Holzen K, Fennell CT, Mani N. The impact of cross-language phonological overlap on bilingual and monolingual toddlers' word recognition. Bilingualism (Cambridge, England) 2019; 22:476-499. [PMID: 31080355] [PMCID: PMC6508490] [DOI: 10.1017/s1366728918000597]
Abstract
We examined how L2 exposure early in life modulates toddler word recognition by comparing German-English bilingual and German monolingual toddlers' recognition of words that overlapped to differing degrees, measured by number of phonological features changed, between English and German (e.g., identical, 1-feature change, 2-feature change, 3-feature change, no overlap). Recognition in English was modulated by language background (bilinguals vs. monolinguals) and by the amount of phonological overlap that English words shared with their L1 German translations. L1 word recognition remained unchanged across conditions between monolingual and bilingual toddlers, showing no effect of learning an L2 on L1 word recognition in bilingual toddlers. Furthermore, bilingual toddlers who had a later age of L2 acquisition had better recognition of words in English than those toddlers who acquired English at an earlier age. The results suggest an important role for L1 phonological experience on L2 word recognition in early bilingual word recognition.
Affiliation(s)
- Katie Von Holzen
- Department of Hearing and Speech Sciences, University of Maryland, USA
- Christopher T Fennell
- School of Psychology and the Department of Linguistics, University of Ottawa, Canada
- Nivedita Mani
- Psychology of Language Research Group, Georg-August-Universität Göttingen, Germany
17
Poltrock S, Chen H, Kwok C, Cheung H, Nazzi T. Adult Learning of Novel Words in a Non-native Language: Consonants, Vowels, and Tones. Front Psychol 2018; 9:1211. [PMID: 30087631] [PMCID: PMC6066720] [DOI: 10.3389/fpsyg.2018.01211]
Abstract
While words are distinguished primarily by consonants and vowels in many languages, tones are also used in the majority of the world's languages to cue lexical contrasts. However, studies on novel word learning have largely concentrated on consonants and vowels. To shed more light on the use of tonal information in novel word learning and its relationship with the development of phonological categories, the present study explored how adults' ability to learn minimal pair pseudowords in a tone language is modulated by their native phonological knowledge. Twenty-four adult speakers of three languages were tested: Cantonese, Mandarin, and French. Eye-tracking was used to record eye movements of these learners, while they were watching animated cartoons in Cantonese. On each trial, adults had to learn two new label-object associations, while the labels differed minimally by a consonant, a vowel, or a tone. Learning would therefore attest to participants' ability to use phonological information to distinguish the paired words. Results first revealed that adult learners in each language group performed better than chance in all conditions. Moreover, compared to native Cantonese adults, both Mandarin- and French-speaking adults performed worse on all three contrasts. In addition, French adults were worse on tones when compared to Mandarin adults. Lastly, no advantage for consonantal information in native lexical processing was found for Cantonese-speaking adults as predicted by the “division of labor” proposal, thus confirming crosslinguistic differences in consonant/vowel weight between speakers of tonal vs. non-tonal languages. These findings establish rapid novel word learning in a non-native language (long-term learning will have to be further assessed), modulated by native phonological knowledge. The implications of the findings of this adult study for further infant word learning studies are discussed.
Affiliation(s)
- Silvana Poltrock
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, Paris, France; Department Linguistik, Universität Potsdam, Potsdam, Germany
- Hui Chen
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, Paris, France
- Celia Kwok
- Department of Linguistics and Modern Language Studies, The Education University of Hong Kong, Tai Po, Hong Kong
- Hintat Cheung
- Department of Linguistics and Modern Language Studies, The Education University of Hong Kong, Tai Po, Hong Kong
- Thierry Nazzi
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, Paris, France
18
Mueller JL, Ten Cate C, Toro JM. A Comparative Perspective on the Role of Acoustic Cues in Detecting Language Structure. Top Cogn Sci 2018; 12:859-874. [PMID: 30033636 DOI: 10.1111/tops.12373]
Abstract
Most human language learners acquire language primarily via the auditory modality. This is one reason why auditory artificial grammars play a prominent role in investigating the development and evolutionary roots of human syntax. The present position paper brings together findings from human and non-human research on the impact of auditory cues on learning about linguistic structures, with a special focus on how different types of cues and biases in auditory cognition may contribute to success and failure in artificial grammar learning (AGL). The basis of our argument is the link between auditory cues and syntactic structure across languages and development. Cross-species comparison suggests that many aspects of auditory cognition that are relevant for language are not human-specific and are present even in rather distantly related species. Furthermore, auditory cues and biases affect learning, as we discuss using examples from auditory perception and AGL studies. This observation, together with the significant role of auditory cues in language processing, supports the idea that auditory cues served as a bootstrap to syntax during language evolution. Yet this also means that potentially human-specific syntactic abilities are not due to basic auditory differences between humans and non-human animals but are based upon more advanced cognitive processes.
Affiliation(s)
- Carel Ten Cate
- Institute of Biology, Leiden University; Leiden Institute for Brain and Cognition
- Juan M Toro
- ICREA (Institució Catalana de Recerca i Estudis Avançats); Center for Brain and Cognition, University Pompeu Fabra
19
Nazzi T, Polka L. The consonant bias in word learning is not determined by position within the word: Evidence from vowel-initial words. J Exp Child Psychol 2018; 174:103-111. [PMID: 29920448 DOI: 10.1016/j.jecp.2018.05.011]
Abstract
The current study used an object manipulation task to explore whether infants rely more on consonant information than on vowel information when learning new words, even when the words start with a vowel. Canadian French-learning 20-month-olds, who were taught pairs of new vowel-initial words contrasted either on their initial vowel (opsi/eupsi) or on the following consonant (oupsa/outsa), were found to have learned the words only in the consonant condition and performed significantly better in the consonant condition than in the vowel condition. These results extend to Canadian French-learning infants the consonant bias in word learning previously found in French-learning infants from France and, crucially, show that vocalic information carries less weight than consonantal information in new word learning even when it is the initial sound of the target words, confirming the consonant bias at the lexical level postulated by Nespor et al. (2003). The current findings also suggest that French-learning infants are able to segment vowel-initial words as early as 20 months of age.
Affiliation(s)
- Thierry Nazzi
- Université Paris Descartes, 75006 Paris, France; Centre National de la Recherche Scientifique (CNRS), Laboratoire Psychologie de la Perception, Institut Pluridisciplinaire des Saints Pères, 75270 Paris, France
- Linda Polka
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec H3A 1G1, Canada; Centre for Research on Brain, Language and Music, McGill University, Montréal, Quebec H3A 1G1, Canada
20
Von Holzen K, Nishibayashi LL, Nazzi T. Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study. Brain Sci 2018; 8:E24. [PMID: 29385046 PMCID: PMC5836043 DOI: 10.3390/brainsci8020024]
Abstract
Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that both facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is shifting from a positive to a negative polarity at this age. Although as a group the infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority showed a negative-going response (Negative Responders), whereas a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, whereas Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to the previous literature. Responses to coda consonant mispronunciations were inconclusive, revealing neither clear sensitivity nor a clear lack of it. We found that infants showing a more mature, negative response at test to newly segmented words compared with control words (evaluating segmentation skill) and to mispronunciations (evaluating phonological processing) also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.
Affiliation(s)
- Katie Von Holzen
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20740, USA
- Leo-Lyuki Nishibayashi
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
- Laboratory for Language Development, RIKEN Brain Science Institute, Wako-shi, Saitama-ken 351-0198, Japan
- Thierry Nazzi
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
21
Hochmann JR, Benavides-Varela S, Fló A, Nespor M, Mehler J. Bias for Vocalic Over Consonantal Information in 6-Month-Olds. Infancy 2017. [DOI: 10.1111/infa.12203]
Affiliation(s)
- Jean-Rémy Hochmann
- CNRS, Institut des Sciences Cognitives Marc Jeannerod (UMR 5304), Univ Lyon
- Ana Fló
- Cognitive Neuroscience Department, SISSA, International School for Advanced Studies
- Marina Nespor
- Cognitive Neuroscience Department, SISSA, International School for Advanced Studies
- Jacques Mehler
- Cognitive Neuroscience Department, SISSA, International School for Advanced Studies
22
Monte-Ordoño J, Toro JM. Different ERP profiles for learning rules over consonants and vowels. Neuropsychologia 2017; 97:104-111. [PMID: 28232218 DOI: 10.1016/j.neuropsychologia.2017.02.014]
Abstract
The consonant-vowel hypothesis suggests that consonants and vowels tend to be used differently during language processing. In this study we explored whether these functional differences trigger different neural responses in a rule learning task. We recorded ERPs while nonsense words were presented in an oddball paradigm. An ABB rule was implemented either over the consonants (Consonant condition) or over the vowels (Vowel condition) composing the standard words. Deviant stimuli were composed of novel phonemes and could either implement the same ABB rule as the standards (Phoneme deviants) or implement a different ABA rule (Rule deviants). We observed shared early components (P1 and MMN) for both types of deviants across both conditions. We also observed differences across conditions around 400 ms: in the Consonant condition, Phoneme deviants triggered a posterior negativity, whereas in the Vowel condition, Rule deviants triggered an anterior negativity. These responses demonstrate distinct neural signatures for the violation of abstract rules over different phonetic categories.
Affiliation(s)
- Juan M Toro
- Universitat Pompeu Fabra, C. Roc Boronat 138, 08018 Barcelona, Spain; ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain
23
Nishibayashi LL, Nazzi T. Vowels, then consonants: Early bias switch in recognizing segmented word forms. Cognition 2016; 155:188-203. [PMID: 27428809 DOI: 10.1016/j.cognition.2016.07.003]
Abstract
The division of labor hypothesis proposed by Nespor, Peña, and Mehler (2003) postulates that consonants are more important than vowels in lexical processing (when learning and recognizing words). This consonant bias (C-bias) is supported by many adult and toddler studies. However, some crosslinguistic variation has been found in toddlerhood, and various hypotheses have been proposed to account for the origin of the consonant bias, which make distinct predictions regarding its developmental trajectory during the first year of life. The present study evaluated these hypotheses by investigating the consonant bias in young infants learning French, a language in which a consistent consonant bias is reported from 11 months of age onward. Accordingly, in a series of word form segmentation experiments building on the fact that both 6- and 8-month-old French-learning infants can segment monosyllabic words, we investigated the relative impact of consonant and vowel mispronunciations on the recognition of segmented word forms at these two ages. Infants were familiarized with passages containing monosyllabic target words and then tested in different conditions, all including consonant and/or vowel mispronunciations of the target words. Overall, our findings reveal a consonant bias at 8 months but an opposite vowel bias at 6 months. These findings first establish that the consonant bias emerges between 6 and 8 months of age in French-learning infants. Second, we discuss the factors that might explain such a developmental trajectory, highlighting the possible roles of pre-lexical and phonological acquisition.
Affiliation(s)
- Thierry Nazzi
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire de Psychologie de la Perception (UMR 8242), Paris, France