1. Berger S, Batterink LJ. Children extract a new linguistic rule more quickly than adults. Dev Sci 2024:e13498. PMID: 38517035; DOI: 10.1111/desc.13498.
Abstract
Children achieve better long-term language outcomes than adults. However, it remains unclear whether children actually learn language more quickly than adults during real-time exposure to input, which would indicate truly superior language learning abilities, or whether this advantage stems from other factors. To examine this issue, we compared the rate at which children (8-10 years) and adults extracted a novel, hidden linguistic rule, in which novel articles probabilistically predicted the animacy of associated nouns (e.g., "gi lion"). Participants categorized these two-word phrases according to a second, explicitly instructed rule over two sessions, separated by an overnight delay. Both children and adults successfully learned the hidden animacy rule through mere exposure to the phrases, showing slower response times and decreased accuracy on occasional phrases that violated the rule. Critically, sensitivity to the hidden rule emerged much more quickly in children than in adults: children showed a processing cost for violation trials from very early on in learning, whereas adults did not show reliable sensitivity to the rule until the second session. Children also showed superior generalization of the hidden animacy rule when asked to classify nonword trials (e.g., "gi badupi") according to it. Children and adults showed similar retention of the hidden rule over the delay period. These results provide insight into the nature of the critical period for language, suggesting that children have a true advantage over adults in the rate of implicit language learning: relative to adults, children more rapidly extract hidden linguistic structures during real-time language exposure.
Research highlights:
- Children and adults both succeeded in implicitly learning a novel, uninstructed linguistic rule, based solely on exposure to input.
- Children learned the novel linguistic rule much more quickly than adults.
- Children showed better generalization performance than adults when asked to apply the novel rule to nonsense words without semantic content.
- Results provide insight into the nature of critical period effects in language, indicating that children have an advantage over adults in real-time language learning.
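As a concrete illustration of the hidden-rule design this abstract describes, the Python sketch below generates two-word phrases in which a novel article probabilistically predicts noun animacy, with occasional violation trials. The article forms ("gi", "ul"), the word lists, the article-to-animacy mapping, and the 85% rule strength are illustrative assumptions for exposition, not the authors' materials.

```python
import random

# Hypothetical stimulus sketch: two novel articles probabilistically predict
# noun animacy. All forms, lists, and proportions below are assumptions.
ANIMATE = ["lion", "tiger", "rabbit", "horse"]
INANIMATE = ["table", "rock", "bottle", "lamp"]
RULE_STRENGTH = 0.85  # proportion of rule-consistent trials (assumed)

def make_trial(rng: random.Random) -> dict:
    """Build one two-word phrase; most trials follow the hidden rule."""
    article = rng.choice(["gi", "ul"])
    follows_rule = rng.random() < RULE_STRENGTH
    # Assumed mapping: "gi" predicts animate nouns, "ul" inanimate nouns.
    animate = (article == "gi") == follows_rule
    noun = rng.choice(ANIMATE if animate else INANIMATE)
    return {"phrase": f"{article} {noun}", "violation": not follows_rule}

rng = random.Random(1)
for trial in (make_trial(rng) for _ in range(10)):
    print(trial)
```

Tagging each trial's violation status is what allows response-time and accuracy costs on rule-violating phrases to be measured against rule-consistent ones.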
Affiliation(s)
- Sarah Berger, Department of Psychology, University of Western Ontario, London, Canada
- Laura J Batterink, Department of Psychology, University of Western Ontario, London, Canada
2. Steffman J, Sundara M. Short-term exposure alters adult listeners' perception of segmental phonotactics. JASA Express Lett 2023; 3:125202. PMID: 38085137; DOI: 10.1121/10.0023900.
Abstract
This study evaluates the malleability of adults' perception of probabilistic phonotactic (biphone) probabilities, building on a body of literature on statistical phonotactic learning. We first replicated the finding that listeners categorize phonetic continua as sounds that create higher-probability sequences in their native language. Listeners were then exposed to skewed distributions of biphone contexts, which enhanced or reversed these effects. Thus, listeners dynamically update biphone probabilities (BPs) and bring the updated probabilities to bear on the perception of ambiguous acoustic information. These short-term effects can override long-term BP effects rooted in native-language experience.
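To make the notion of a biphone probability concrete, here is a minimal Python sketch that estimates P(segment2 | segment1) from pair counts and shows how a skewed short-term exposure set can shift those estimates. The toy transcriptions are assumptions for exposition, not the study's stimuli.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def biphone_probs(words: list[str]) -> dict[tuple[str, str], float]:
    """Estimate P(s2 | s1) from segment-pair counts across words."""
    pair_counts, first_counts = Counter(), Counter()
    for word in words:
        for s1, s2 in pairwise(word):
            pair_counts[(s1, s2)] += 1
            first_counts[s1] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# "Long-term" lexicon vs. a skewed short-term exposure set (both toy data)
lexicon = ["mat", "map", "man", "nap", "tan"]
exposure = ["mak", "mak", "nak"]  # skewed toward a previously unattested biphone

print(biphone_probs(lexicon).get(("m", "a")))            # 1.0 in the toy lexicon
print(biphone_probs(lexicon).get(("a", "k"), 0.0))        # 0.0 before exposure
print(biphone_probs(lexicon + exposure).get(("a", "k")))  # rises after exposure
```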
Affiliation(s)
- Jeremy Steffman, Linguistics and English Language, The University of Edinburgh, Edinburgh, EH8 9AD, United Kingdom
- Megha Sundara, Linguistics, University of California, Los Angeles, California 90095, USA
3. Jeong H, van den Hoven E, Madec S, Bürki A. Behavioral and Brain Responses Highlight the Role of Usage in the Preparation of Multiword Utterances for Production. J Cogn Neurosci 2021; 33:2231-2264. PMID: 34272953; DOI: 10.1162/jocn_a_01757.
Abstract
Usage-based theories assume that all aspects of language processing are shaped by the distributional properties of the language: the frequency not only of words but also of larger chunks plays a major role. These theories predict that the frequency of phrases influences both the time needed to prepare these phrases for production and their acoustic duration. By contrast, dominant psycholinguistic models of utterance production predict no such effects: in these models, the system keeps track of the frequency of individual words but not of their co-occurrences. This study investigates the extent to which phrase frequency affects naming latencies and acoustic duration using a balanced design, in which the same words are recombined to build high- and low-frequency phrases. Participants' brain signals were recorded to obtain information on the electrophysiological bases and functional locus of frequency effects. Forty-seven participants named pictures using high- and low-frequency adjective-noun phrases. Naming latencies were shorter for high-frequency than for low-frequency phrases. There was no evidence that phrase frequency affected acoustic duration. The electrophysiological signal differed between high- and low-frequency phrases in time windows that do not overlap with conceptualization or articulation processes. These findings suggest that phrase frequency influences the preparation of phrases for production, irrespective of the lexical properties of the constituents, and that this effect originates at least partly when speakers access and encode linguistic representations. Moreover, this study provides information on how the brain signal recorded during the preparation of utterances changes with the frequency of word combinations.
4. Katz J, Moore MW. Phonetic Effects in Child and Adult Word Segmentation. J Speech Lang Hear Res 2021; 64:854-869. PMID: 33571028; DOI: 10.1044/2020_jslhr-19-00275.
Abstract
Purpose: The aim of the study was to investigate the effects of specific acoustic patterns on word learning and segmentation in 8- to 11-year-old children and in college students.
Method: Twenty-two children (ages 8;2-11;4 [years;months]) and 36 college students listened to synthesized "utterances" in artificial languages consisting of six iterated "words," which followed either a phonetically natural lenition-fortition pattern or an unnatural (cross-linguistically unattested) antilenition pattern. A two-alternative forced-choice task tested whether they could discriminate between occurring and nonoccurring sequences. Participants were exposed to both languages, counterbalanced for order across subjects, in sessions spaced at least 1 month apart.
Results: Children showed little evidence of learning in either the phonetically natural or the unnatural condition, and no evidence of differences in learning across the two conditions. Adults showed the predicted (and previously attested) interaction between learning and phonetic condition: the phonetically natural language was learned better. The adults also showed a strong effect of session: subjects performed much worse during the second session than the first.
Conclusions: School-age children not only failed to demonstrate the phonetic asymmetry shown by adults in previous studies but also failed to show strong evidence of any learning at all. The fact that the phonetic asymmetry (and general learning effect) was replicated with adults suggests that the child result is not due to inadequate stimuli or procedures. The strong carryover effect for adults also suggests that they retain knowledge about the sound patterns of an artificial language for over a month, longer than has been reported in laboratory studies of purely phonetic/phonological learning.
Supplemental Material: https://doi.org/10.23641/asha.13641284
Affiliation(s)
- Jonah Katz, Department of World Languages, Literatures, and Linguistics, West Virginia University, Morgantown
- Michelle W Moore, Department of Communication Sciences and Disorders, West Virginia University, Morgantown
5.
Abstract
Speech errors are sensitive to newly learned phonotactic constraints. For example, if speakers produce strings of syllables in which /f/ is an onset if the vowel is /æ/ but a coda if the vowel is /ɪ/, their slips will respect that constraint after a period of sleep. Constraints in which the contextual factor is nonlinguistic, however, do not appear to be learnable by this method (for example, /f/ is an onset if the speech rate is fast, but a coda if the speech rate is slow). The present study demonstrated that adult English speakers can learn (after a sleep period) constraints based on stress (e.g., /f/ is an onset if the syllable is stressed, but a coda if the syllable is unstressed), but cannot learn analogous constraints based on tone (e.g., /f/ is an onset if the tone is rising, but a coda if the tone is falling). The results are consistent with the fact that, in English, stress is a lexically relevant phonological property (e.g., "INsight" and "inCITE" are different words), but tone is not (e.g., "yes!" and "yes?" are the same word, despite their different pragmatic functions). The results provide useful constraints on how consolidation effects in learning may interact with early learning experiences.
6. Alderete J, Tupper P. Phonological regularity, perceptual biases, and the role of phonotactics in speech error analysis. Wiley Interdiscip Rev Cogn Sci 2018; 9:e1466. PMID: 29847014; DOI: 10.1002/wcs.1466.
Abstract
Speech errors involving manipulations of sounds tend to be phonologically regular in the sense that they obey the phonotactic rules of well-formed words. We review the empirical evidence for phonological regularity in prior research, including both categorical assessments of words and regularity at the granular level of specific segments and contexts. Because reports of regularity are affected by human perceptual biases, we also document this regularity in a new data set of 2,228 sublexical errors that was collected using methods that are demonstrably less prone to bias. These facts validate the claim that sound errors are overwhelmingly regular, but the new evidence suggests that speech errors admit more phonologically ill-formed words than previously thought. Detailed facts about the phonological structure of errors, including this revised standard, are then related to model assumptions in contemporary theories of phonological encoding. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Computational Models of Language; Psychology > Language.
Affiliation(s)
- Paul Tupper, Simon Fraser University, Burnaby, BC, Canada
7. The role of consolidation in learning context-dependent phonotactic patterns in speech and digital sequence production. Proc Natl Acad Sci U S A 2018; 115:3617-3622. PMID: 29555766; DOI: 10.1073/pnas.1721107115.
Abstract
Speakers implicitly learn novel phonotactic patterns by producing strings of syllables, and the learning is revealed in their speech errors. First-order patterns, such as "/f/ must be a syllable onset," can be distinguished from contingent, or second-order, patterns, such as "/f/ must be an onset if the vowel is /a/, but a coda if the vowel is /o/." A meta-analysis of 19 experiments clearly demonstrated that first-order patterns affect speech errors to a very great extent within a single experimental session, whereas second-order vowel-contingent patterns only affect errors on the second day of testing, suggesting the need for a consolidation period. Two experiments tested an analogue of these studies involving sequences of button pushes, with fingers as "consonants" and thumbs as "vowels." The button-push errors reproduced two of the key speech-error findings: first-order patterns were learned quickly, but second-order thumb-contingent patterns were only strongly revealed in the errors on the second day of testing. The influence of computational complexity on the implicit learning of phonotactic patterns in speech production may thus be a general feature of sequence production.
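The first-order/second-order distinction can be made concrete with a small scorer for produced syllables, as in the Python sketch below. The specific constraint definitions are illustrative, not the experiments' exact materials.

```python
# Minimal sketch distinguishing a first-order constraint from a second-order
# (vowel-contingent) one when scoring produced CVC syllables.
def obeys_first_order(syllable: str) -> bool:
    """First-order: /f/ must be a syllable onset (never a coda)."""
    return not syllable.endswith("f")

def obeys_second_order(syllable: str) -> bool:
    """Second-order: /f/ is an onset if the vowel is 'a', a coda if 'o'."""
    if "f" not in syllable:
        return True
    vowel = next(ch for ch in syllable if ch in "ao")
    return syllable.startswith("f") if vowel == "a" else syllable.endswith("f")

for syl in ["fak", "kaf", "kof", "fok"]:
    print(syl, obeys_first_order(syl), obeys_second_order(syl))
```

Scoring error tokens against both definitions is what lets an analysis ask whether slips respect only the simple positional restriction or also its vowel-contingent refinement.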
8. Goldrick M. Encoding of distributional regularities independent of markedness: Evidence from unimpaired speakers. Cogn Neuropsychol 2018; 34:476-481. PMID: 29457555; DOI: 10.1080/02643294.2017.1421149.
Abstract
Romani, Galuzzi, Guariglia, and Goslin ("Comparing phoneme frequency, age of acquisition and loss in aphasia: Implications for phonological universals," Cognitive Neuropsychology) used speech error data from individuals with acquired impairments to argue that, independent of articulatory complexity, within-language distributional regularities influence the processing of sound structure in speech production. Converging evidence from unimpaired speakers is reviewed, focusing on speech errors in language production. Future research should examine how articulatory and frequency factors are integrated in language processing.
Affiliation(s)
- Matthew Goldrick, Department of Linguistics, Northwestern University, Evanston, IL, USA
9. Havas V, Taylor JSH, Vaquero L, de Diego-Balaguer R, Rodríguez-Fornells A, Davis MH. Semantic and phonological schema influence spoken word learning and overnight consolidation. Q J Exp Psychol (Hove) 2018; 71:1469-1481. PMID: 28856956; DOI: 10.1080/17470218.2017.1329325.
Abstract
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian); the words were paired with pictures of familiar or unfamiliar objects, or with no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1-like vs L2-like novel words) familiarity. Participants were trained and tested across a 12-hr interval that included either overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar, but not unfamiliar, pictures enhanced recognition memory for novel words. Implications for complementary-systems accounts of word learning are discussed.
Affiliation(s)
- Viktória Havas, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
- JSH Taylor, Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK; Department of Psychology, Royal Holloway, University of London, Egham, UK
- Lucía Vaquero, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Ruth de Diego-Balaguer, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Antoni Rodríguez-Fornells, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Matthew H Davis, Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK
10. White KS, Chambers KE, Miller Z, Jethava V. Listeners learn phonotactic patterns conditioned on suprasegmental cues. Q J Exp Psychol (Hove) 2016; 70:2560-2576. PMID: 27734753; DOI: 10.1080/17470218.2016.1247896.
Abstract
Language learners are sensitive to phonotactic patterns from an early age and can acquire both simple positional restrictions and second-order restrictions contingent on segment identity (e.g., /f/ is an onset with /æ/ but a coda with /ɪ/). The present study explored the learning of phonotactic patterns conditioned on a suprasegmental cue: lexical stress. Adults first heard nonwords in which trochaic and iambic items had different consonant restrictions. In Experiment 1, participants trained with phonotactic patterns involving natural classes of consonants later falsely recognized novel items that were consistent with the training patterns (legal items), demonstrating that they had learned the stress-conditioned phonotactic patterns; however, this was true only for iambic items. In Experiment 2, participants completed a forced-choice test between novel legal and novel illegal items and were again successful only for the iambic items. Experiment 3 demonstrated learning for trochaic items when they were presented alone. Finally, in Experiment 4, in which the training phase was lengthened, participants successfully learned both sets of phonotactic patterns. These experiments provide evidence that learners consider more global phonological properties in the computation of phonotactic patterns, and that learners can acquire multiple sets of patterns simultaneously, even contradictory ones.
Affiliation(s)
- Katherine S White, Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Kyle E Chambers, Department of Psychology, Gustavus Adolphus College, St Peter, MN, USA
- Zachary Miller, Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Vibhuti Jethava, Department of Psychology, University of Waterloo, Waterloo, ON, Canada
11. Kittredge AK, Dell GS. Learning to speak by listening: Transfer of phonotactics from perception to production. J Mem Lang 2016; 89:8-22. PMID: 27840556; PMCID: PMC5102624; DOI: 10.1016/j.jml.2015.08.001.
Abstract
The language production and perception systems rapidly learn novel phonotactic constraints. In production, for example, producing syllables in which /f/ is restricted to onset position (as /h/ is in English) causes one's speech errors to mirror that restriction. We asked whether perceptual experience of a novel phonotactic distribution transfers to production. In three experiments, participants alternated between hearing and producing strings of syllables. In the same condition, the production and perception trials followed identical phonotactics (e.g., /f/ is an onset in both). In the opposite condition, they followed reversed constraints (e.g., /f/ is an onset for production but a coda for perception). The tendency for speech errors to follow the production constraint was diluted when the opposite pattern was present on perception trials, demonstrating transfer of learning from perception to production. Transfer occurred only for perceptual tasks that may involve internal production, including an error-monitoring task, which we argue engages production via prediction.
Affiliation(s)
- Audrey K Kittredge, Psychology Department, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
- Gary S Dell, Beckman Institute, University of Illinois, Urbana-Champaign, 405 N. Matthews Ave, Urbana, IL 61801, USA
12. Warker JA, Dell GS. New phonotactic constraints learned implicitly by producing syllable strings generalize to the production of new syllables. J Exp Psychol Learn Mem Cogn 2015; 41:1902-1910. PMID: 26030628; DOI: 10.1037/xlm0000143.
Abstract
Novel phonotactic constraints can be acquired by hearing or speaking syllables that follow a novel constraint. When learned from hearing syllables, these newly learned constraints generalize to syllables that were not experienced during training. However, generalization of phonotactic learning to novel syllables has never been persuasively demonstrated in production. The typical production experiment examines phonotactic learning through speech errors: after participants repeat syllable sequences embedded with a novel phonotactic constraint, such as /f/ appearing only in onset position, their speech errors come to adhere to the novel constraint. For example, when participants mistakenly move an /f/ to another syllable, it overwhelmingly moves to an onset rather than a coda position. We assessed whether constraints learned and measured in this manner generalize to unexperienced syllables and, at the same time, whether the slips tend to create previously experienced syllables (a syllable priming effect). We found evidence of generalization but not of syllable priming in participants' speech errors: the effect of phonotactic learning was expressed as powerfully during the production of unexperienced syllables as of experienced ones. A connectionist model simulated the experimental results with a single learning mechanism and successfully reproduced the constraint learning, the generalization, and the lack of priming.
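As a loose illustration of how a single learning mechanism can acquire a vowel-contingent (second-order) constraint, here is a minimal numpy sketch. It is not the authors' model; the feature coding, network size, and training data are assumptions made for exposition.

```python
import numpy as np

# Tiny network learning a vowel-contingent constraint: /f/ is legal as an
# onset with /a/ but as a coda with /o/. Input: one-hot vowel plus one-hot
# position of /f/. Output: 1 if the combination is legal.
def encode(vowel: str, f_position: str) -> np.ndarray:
    return np.array([vowel == "a", vowel == "o",
                     f_position == "onset", f_position == "coda"], dtype=float)

X = np.array([encode("a", "onset"), encode("a", "coda"),
              encode("o", "onset"), encode("o", "coda")])
y = np.array([[1.0], [0.0], [0.0], [1.0]])  # legality under the constraint

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):  # plain backpropagation with squared error
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

predict = lambda v, p: sigmoid(sigmoid(encode(v, p) @ W1 + b1) @ W2 + b2)
print(predict("a", "onset"))  # expected near 1 (legal)
print(predict("o", "onset"))  # expected near 0 (illegal)
```

Because the input coding abstracts over the surrounding consonants, whatever mapping the network learns applies equally to syllables whose consonant frames were never seen in training, one simple way generalization can fall out of distributed representations.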
Affiliation(s)
- Gary S Dell, Beckman Institute, University of Illinois at Urbana-Champaign
13. Bernard A. An onset is an onset: Evidence from abstraction of newly-learned phonotactic constraints. J Mem Lang 2015; 78:18-32. PMID: 25378800; PMCID: PMC4217139; DOI: 10.1016/j.jml.2014.09.001.
Abstract
Phonotactic constraints are language-specific patterns in the sequencing of speech sounds. Are these constraints represented at the syllable level (ng cannot begin syllables in English) or at the word level (ng cannot begin words)? In a continuous recognition-memory task, participants more often falsely recognized novel test items that followed the training constraints than items that violated them, whether or not training and test items matched in word structure (one or two syllables) or in the position of the restricted consonants (word-edge or word-medial). For example, after learning that /p/s are onsets and /f/s codas, participants generalized from pef (one syllable) to putvif (two syllables), and from putvif (word-edge positions) to bufpak (word-medial positions). These results suggest that newly learned phonotactic constraints are represented at the syllable level: the syllable is a representational unit that is available, and spontaneously used, when learning speech-sound constraints. In the current experiments, an onset is an onset and a coda is a coda, regardless of word structure or word position.
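A small sketch of what a syllable-level representation buys: the same onset/coda restriction is checked inside every syllable, so a constraint learned on one-syllable items applies unchanged to two-syllable items. The greedy CVC syllabifier and the /p/-onset, /f/-coda constraint below are toy assumptions for the demo, not the study's procedure.

```python
VOWELS = set("aeiou")

def syllabify(word: str) -> list[str]:
    """Toy greedy CVC parse: optional onset, vowel nucleus, optional coda."""
    syllables, i = [], 0
    while i < len(word):
        onset = word[i] if word[i] not in VOWELS else ""
        i += len(onset)
        nucleus = word[i]; i += 1
        coda = ""
        # take a coda consonant only if it is not the onset of a next syllable
        if i < len(word) and word[i] not in VOWELS and \
           (i + 1 == len(word) or word[i + 1] not in VOWELS):
            coda = word[i]; i += 1
        syllables.append(onset + nucleus + coda)
    return syllables

def legal(word: str) -> bool:
    """Constraint checked per syllable: /p/ only in onsets, /f/ only in codas."""
    return all(not s.startswith("f") and not s.endswith("p")
               for s in syllabify(word))

for w in ["pef", "putvif", "bufpak", "fep"]:
    print(w, syllabify(w), legal(w))  # only "fep" violates the constraint
```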
14. Gaskell MG, Warker J, Lindsay S, Frost R, Guest J, Snowdon R, Stackhouse A. Sleep underpins the plasticity of language production. Psychol Sci 2014; 25:1457-1465. PMID: 24894583; DOI: 10.1177/0956797614535937.
Abstract
The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.