1. Steffman J, Sundara M. Short-term exposure alters adult listeners' perception of segmental phonotactics. JASA Express Letters 2023; 3:125202. PMID: 38085137. DOI: 10.1121/10.0023900.
Abstract
This study evaluates the malleability of adults' perception of probabilistic phonotactic (biphone) probabilities, building on a body of literature on statistical phonotactic learning. We first replicated the finding that listeners categorize phonetic continua as the sounds that create higher-probability sequences in their native language. Listeners were then exposed to skewed distributions of biphone contexts, which enhanced or reversed these effects. Thus, listeners dynamically update biphone probabilities (BPs) and bring this knowledge to bear on the perception of ambiguous acoustic information. These short-term effects can override long-term BP effects rooted in native-language experience.
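The quantity being updated here is easy to state concretely. Below is a minimal sketch (not the authors' materials or code; the toy lexicon, one-character-per-segment transcriptions, and smoothing constant are invented for illustration) of how conditional biphone probabilities can be estimated from a lexicon, and of how short-term exposure amounts to re-estimating after skewed training tokens are added to experience:

```python
from collections import Counter

def biphone_probs(words, alpha=1.0):
    """Return P(second | first) for adjacent segment pairs, add-alpha smoothed."""
    pair_counts, first_counts = Counter(), Counter()
    segments = sorted({s for w in words for s in w})
    for w in words:
        for a, b in zip(w, w[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    v = len(segments)
    return {(a, b): (pair_counts[(a, b)] + alpha) / (first_counts[a] + alpha * v)
            for a in segments for b in segments}

# One character = one toy segment in these invented transcriptions.
lexicon = ["kaet", "baet", "paet", "taep"]
baseline = biphone_probs(lexicon)
# Short-term exposure to a skewed distribution: re-estimate with the
# training tokens added to the listener's experience.
updated = biphone_probs(lexicon + ["taek"] * 20)
print(baseline[("a", "e")], updated[("a", "e")])
```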
Affiliation(s)
- Jeremy Steffman
- Linguistics and English Language, The University of Edinburgh, Edinburgh, EH8 9AD, United Kingdom
- Megha Sundara
- Linguistics, University of California, Los Angeles, California 90095
2. Goffman L, Gerken L. A developmental account of the role of sequential dependencies in typical and atypical language learners. Cogn Neuropsychol 2023; 40:243-264. PMID: 37963089; PMCID: PMC10939949. DOI: 10.1080/02643294.2023.2275837.
Abstract
The Gerken lab has shown that infants can learn sound patterns built on local sequential dependencies, patterns that are no longer readily accessible to adults. The Goffman lab has shown that children with developmental language disorder (DLD) exhibit deficits in learning the sequential dependencies that underpin the acquisition of words and grammar, as well as other types of domain-general sequences. DLD thus appears to reflect an impaired ability to detect and deploy sequential dependencies across multiple domains. We meld these two lines of research to propose a novel account in which sequential dependency learning is required for many phonological and morphosyntactic patterns in natural language and is also central to the linguistic and domain-general deficits attested in DLD. Patterns that rely not on sequential dependencies but on networks of stored forms, however, remain learnable by children with DLD, as they are by adults.
Affiliation(s)
- Lisa Goffman
- Callier Center, Speech, Language, & Hearing, University of Texas at Dallas, Richardson, USA
- LouAnn Gerken
- Psychology & Cognitive Science, University of Arizona, Tucson, USA
3. Nenadić F, Tucker BV, Ten Bosch L. Computational Modeling of an Auditory Lexical Decision Experiment Using DIANA. Language and Speech 2022:238309221111752. PMID: 36000386; PMCID: PMC10394956. DOI: 10.1177/00238309221111752.
Abstract
We present an implementation of DIANA, a computational model of spoken word recognition, to model responses collected in the Massive Auditory Lexical Decision (MALD) project. DIANA is an end-to-end model, including an activation and decision component that takes the acoustic signal as input, activates internal word representations, and outputs lexicality judgments and estimated response latencies. Simulation 1 presents the process of creating acoustic models required by DIANA to analyze novel speech input. Simulation 2 investigates DIANA's performance in determining whether the input signal is a word present in the lexicon or a pseudoword. In Simulation 3, we generate estimates of response latency and correlate them with general tendencies in participant responses in MALD data. We find that DIANA performs fairly well in free word recognition and lexical decision. However, the current approach for estimating response latency provides estimates opposite to those found in behavioral data. We discuss these findings and offer suggestions as to what a contemporary model of spoken word recognition should be able to do.
Affiliation(s)
- Filip Nenadić
- University of Alberta, Canada; Singidunum University, Serbia
4. Isbilen ES, McCauley SM, Kidd E, Christiansen MH. Statistically Induced Chunking Recall: A Memory-Based Approach to Statistical Learning. Cogn Sci 2021; 44:e12848. PMID: 32608077. DOI: 10.1111/cogs.12848.
Abstract
The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short-term memory is fundamentally shaped by long-term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word-like units, then the statistically structured items would be recalled more accurately than the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities: participants displayed significantly improved recall of the statistically structured items and even recalled specific trigram chunks from the input. SICR also exhibits greater test-retest reliability in the auditory modality, and greater sensitivity to individual differences in both modalities, than the standard two-alternative forced-choice task. These results provide key empirical support for the chunking account of statistical learning and contribute a valuable new tool to the literature.
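The logic of the SICR measure lends itself to a compact illustration. A minimal sketch, assuming an invented syllable inventory and invented trisyllabic "words" (not the published stimuli or scoring code): recall strings that follow the language's statistics should contain more trained trigram chunks than syllable-matched random strings.

```python
# Illustrative SICR-style scoring: count recalled trigrams that exactly
# match a trained "word" of the artificial language.
WORDS = [("pa", "bi", "ku"), ("go", "la", "tu"), ("da", "ro", "pi")]

def trigram_chunks(recalled):
    """Number of recalled trigram windows matching a trained word."""
    trigrams = [tuple(recalled[i:i + 3]) for i in range(0, len(recalled) - 2, 3)]
    return sum(t in WORDS for t in trigrams)

structured = ["pa", "bi", "ku", "da", "ro", "pi"]   # follows the statistics
random_ctrl = ["pa", "ro", "tu", "da", "bi", "ku"]  # same syllables, reordered
print(trigram_chunks(structured), trigram_chunks(random_ctrl))  # 2 vs 0
```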
Affiliation(s)
- Evan Kidd
- Language Development Department, Max Planck Institute for Psycholinguistics; Research School of Psychology, The Australian National University; ARC Centre of Excellence for the Dynamics of Language
- Morten H Christiansen
- Department of Psychology, Cornell University; ARC Centre of Excellence for the Dynamics of Language; School of Communication and Culture, Aarhus University; Haskins Laboratories
5. Jeong H, van den Hoven E, Madec S, Bürki A. Behavioral and Brain Responses Highlight the Role of Usage in the Preparation of Multiword Utterances for Production. J Cogn Neurosci 2021; 33:2231-2264. PMID: 34272953. DOI: 10.1162/jocn_a_01757.
Abstract
Usage-based theories assume that all aspects of language processing are shaped by the distributional properties of the language. The frequency not only of words but also of larger chunks plays a major role in language processing. These theories predict that the frequency of phrases influences the time needed to prepare these phrases for production and their acoustic duration. By contrast, dominant psycholinguistic models of utterance production predict no such effects. In these models, the system keeps track of the frequency of individual words but not of co-occurrences. This study investigates the extent to which the frequency of phrases impacts naming latencies and acoustic duration with a balanced design, where the same words are recombined to build high- and low-frequency phrases. The brain signal of participants is recorded so as to obtain information on the electrophysiological bases and functional locus of frequency effects. Forty-seven participants named pictures using high- and low-frequency adjective-noun phrases. Naming latencies were shorter for high-frequency than low-frequency phrases. There was no evidence that phrase frequency impacted acoustic duration. The electrophysiological signal differed between high- and low-frequency phrases in time windows that do not overlap with conceptualization or articulation processes. These findings suggest that phrase frequency influences the preparation of phrases for production, irrespective of the lexical properties of the constituents, and that this effect originates at least partly when speakers access and encode linguistic representations. Moreover, this study provides information on how the brain signal recorded during the preparation of utterances changes with the frequency of word combinations.
6. Gow DW, Schoenhaut A, Avcu E, Ahlfors SP. Behavioral and Neurodynamic Effects of Word Learning on Phonotactic Repair. Front Psychol 2021; 12:590155. PMID: 33776832; PMCID: PMC7987836. DOI: 10.3389/fpsyg.2021.590155.
Abstract
Processes governing the creation, perception and production of spoken words are sensitive to the patterns of speech sounds in the language user's lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to "repair" illegal onset clusters (/sr/) and report them as legal ones (/ʃr/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.
Affiliation(s)
- David W. Gow
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Department of Psychology, Salem State University, Salem, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, United States
- Adriana Schoenhaut
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Enes Avcu
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Seppo P. Ahlfors
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
7. Learning to predict: Neuronal signatures of auditory expectancy in human event-related potentials. Neuroimage 2020; 225:117472. PMID: 33099012; PMCID: PMC9215305. DOI: 10.1016/j.neuroimage.2020.117472.
Abstract
Learning to anticipate future states of the world based on statistical regularities in the environment is a key component of perception and is vital for the survival of many organisms. Such statistical learning and prediction are crucial for acquiring language and music appreciation. Importantly, learned expectations can be implicitly derived from exposure to sensory input, without requiring explicit information regarding contingencies in the environment. Whereas many previous studies of statistical learning have demonstrated larger neuronal responses to unexpected versus expected stimuli, the neuronal bases of the expectations themselves remain poorly understood. Here we examined behavioral and neuronal signatures of learned expectancy via human scalp-recorded event-related brain potentials (ERPs). Participants were instructed to listen to a series of sounds and press a response button as quickly as possible upon hearing a target noise burst, which was either reliably or unreliably preceded by one of three pure tones in low-, mid-, and high-frequency ranges. Participants were not informed about the statistical contingencies between the preceding tone ‘cues’ and the target. Over the course of a stimulus block, participants responded more rapidly to reliably cued targets. This behavioral index of learned expectancy was paralleled by a negative ERP deflection, designated as a neuronal contingency response (CR), which occurred immediately prior to the onset of the target. The amplitude and latency of the CR were systematically modulated by the strength of the predictive relationship between the cue and the target. Re-averaging ERPs with respect to the latency of behavioral responses revealed no consistent relationship between the CR and the motor response, suggesting that the CR represents a neuronal signature of learned expectancy or anticipatory attention. Our results demonstrate that statistical regularities in an auditory input stream can be implicitly learned and exploited to influence behavior. Furthermore, we uncover a potential ‘prediction signal’ that reflects this fundamental learning process.
8. Hepner CR, Nozari N. The dual origin of lexical perseverations in aphasia: Residual activation and incremental learning. Neuropsychologia 2020; 147:107603. PMID: 32877655. DOI: 10.1016/j.neuropsychologia.2020.107603.
Abstract
Lexical perseveration, the inappropriate repetition of a previous response, is common in aphasia. Two underlying mechanisms have been proposed: residual activation and incremental learning. Previous attempts to differentiate the two have relied on experimental paradigms that encourage semantically related errors and analysis techniques designed to detect perseverations over short distances, resulting in a bias towards detecting short-lag, semantically related perseverations that both mechanisms can account for. Two key predictions that differentiate these accounts remain untested: only residual activation can explain short-lag, semantically unrelated perseverations, whereas only incremental learning can explain long-lag, semantically related perseverations. In this paper, we used a large set of picture naming trials and a novel analysis technique to test these key predictions in a multi-session study involving six individuals with aphasia. We found clear evidence for both mechanisms in different individuals, demonstrating that either one is sufficient to cause perseveration. Importantly, perseverations due to residual activation were associated with more severely impaired systems than those due to incremental learning, suggesting that a certain degree of structural and functional integrity was necessary for incremental learning. Finally, the results supported a key prediction of the incremental learning account by showing perseverations over longer lags than have previously been reported.
Affiliation(s)
- Nazbanou Nozari
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
9.
Abstract
Speech errors are sensitive to newly learned phonotactic constraints. For example, if speakers produce strings of syllables in which /f/ is an onset if the vowel is /æ/, but a coda if the vowel is /ɪ/, their slips will respect that constraint after a period of sleep. Constraints in which the contextual factor is nonlinguistic, however, do not appear to be learnable by this method—for example, /f/ is an onset if the speech rate is fast, but /f/ is a coda if the speech rate is slow. The present study demonstrated that adult English speakers can learn (after a sleep period) constraints based on stress (e.g., /f/ is an onset if the syllable is stressed, but /f/ is a coda if the syllable is unstressed), but cannot learn analogous constraints based on tone (e.g., /f/ is an onset if the tone is rising, but /f/ is a coda if the tone is falling). The results are consistent with the fact that, in English, stress is a relevant lexical phonological property (e.g., “INsight” and “inCITE” are different words), but tone is not (e.g., “yes!” and “yes?” are the same word, despite their different pragmatic functions). The results provide useful constraints on how consolidation effects in learning may interact with early learning experiences.
10.
Abstract
In [Nozari, N., & Hepner, C. R. (2018). To select or to wait? The importance of criterion setting in debates of competitive lexical selection. Cognitive Neuropsychology. Advance online publication. doi:10.1080/02643294.2018.1476335], we proposed a theoretical framework for reconciling two seemingly irreconcilable theories of lexical selection: competitive vs. non-competitive selection. The key point in this framework is the division of language production into two separate, albeit interacting, systems: a decision-making framework and a multi-layered system that maps meaning to sound. Technically, this can be accomplished by superimposing a signal detection model onto the distributions of conflict derived from the core dynamics of mapping semantic features to lexical representations. Based on this framework, we argued that a flexible selection criterion could accommodate patterns predicted by both competitive and non-competitive models of lexical selection. Five excellent commentaries posed various questions regarding the necessity, applicability, and scope of the proposed framework. This paper addresses those questions.
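The central idea, a criterion applied to a conflict signal, can be illustrated with a toy sketch. Everything below (the conflict measure and all numbers) is an assumption for illustration, not the authors' implementation: whether selection looks competitive or non-competitive depends only on where the criterion is placed.

```python
# Toy criterion-setting sketch: conflict is taken here to be the activation
# ratio of the top two candidates; a lax criterion commits early
# (non-competitive-looking), a strict one waits (competitive-looking).
def select(activations, criterion):
    """Return the winner's index once top-two conflict falls below criterion."""
    ranked = sorted(activations, reverse=True)
    conflict = ranked[1] / ranked[0]         # closer to 1.0 = more conflict
    if conflict <= criterion:
        return activations.index(ranked[0])  # commit to the current leader
    return None                              # wait: accumulate more evidence

acts = [0.80, 0.55, 0.20]            # semantic-to-lexical activations
print(select(acts, criterion=0.75))  # lax criterion -> selects item 0
print(select(acts, criterion=0.50))  # strict criterion -> waits (None)
```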
Affiliation(s)
- Nazbanou Nozari
- Department of Neurology, Johns Hopkins University, Baltimore, MD, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
11.
Abstract
Despite the obvious linguistic nature of typing, current psychological models of typing are, to a large extent, divorced from models of spoken language production. This gap has left unanswered many questions regarding the cognitive architecture of typing. In this article we advocate the use of a psycholinguistic framework for studying typing, by showing that such a framework could reveal important similarities and differences between spoken and typed production. Specifically, we investigated the interaction between the lexical and postlexical layers by using a phenomenon known in spoken production as the "repeated-phoneme effect." Participants typed four-word sequences of "finger-twisters" (equivalent to tongue-twisters in spoken production), in which the vowel in the last two words was either repeated (e.g., "fog top") or not (e.g., "fog tip"). We found reliably more migration errors between the consonants of the two typed words when the vowel was repeated, even after the effect of phonology was accounted for. This finding is compatible with an interactive typing system in which postlexical representations send feedback to lexical representations and reveals similar dynamics between spoken and typed production. Additional analyses showed further similarities to spoken production, such as distinct lexical and postlexical error categories, but also revealed that typing errors were much more likely than spoken errors to violate phonotactic constraints. These results provide the first demonstration of feedback between the postlexical and lexical layers in typing, and more generally demonstrate the utility of adopting a psycholinguistic framework tailored specifically to the study of typing.
12.
Abstract
OBJECTIVES: Sonority is the relative perceptual prominence/loudness of speech sounds of the same length, stress, and pitch. Children with cochlear implants (CIs), with restored audibility and relatively intact temporal processing, are expected to benefit from the perceptual prominence cues of highly sonorous sounds. Sonority also influences lexical access through the sonority-sequencing principle (SSP), a grammatical phonotactic rule that facilitates the recognition and segmentation of syllables within speech. The more nonsonorous the onset of a syllable is, the larger the degree of sonority rise to the nucleus, and the more optimal the SSP. Children with CIs may experience hindered or delayed development of the language-learning rule SSP as a result of their deprived/degraded auditory experience. The purpose of the study was to explore sonority's role in the speech perception and lexical access of prelingually deafened children with CIs.
DESIGN: A case-control study with 15 children with CIs, 25 normal-hearing children (NHC), and 50 normal-hearing adults was conducted, using a lexical identification task of novel, nonreal CV-CV words taught via fast mapping. The CV-CV words were constructed according to four sonority conditions, entailing syllables with sonorous onsets/less optimal SSP (SS) and nonsonorous onsets/optimal SSP (NS) in all combinations, that is, SS-SS, SS-NS, NS-SS, and NS-NS. Outcome measures were accuracy and reaction times (RTs). A subgroup analysis of 12 children with CIs pair-matched to 12 NHC on hearing age aimed to study the effect of the oral-language exposure period on sonority-related performance.
RESULTS: The children groups showed similar accuracy performance, overall and across all the sonority conditions. However, within-group comparisons showed that the children with CIs scored more accurately on the SS-SS condition relative to the NS-NS and NS-SS conditions, while the NHC performed equally well across all conditions. Additionally, adult-comparable accuracy was achieved by the children with CIs only on the SS-SS condition, as opposed to the NS-SS, SS-NS, and SS-SS conditions for NHC. Accuracy analysis of the subgroups of children matched on hearing age showed similar results. Overall longer RTs were recorded by the children with CIs on the sonority-treated lexical task, specifically on the SS-SS condition compared with age-matched controls. However, the subgroup analysis showed that the two groups of children did not differ on RTs.
CONCLUSIONS: Children with CIs performed better in lexical tasks relying on sonority's perceptual prominence cues, as in the SS-SS condition, than in conditions relying on SSP-initial cues, such as NS-NS and NS-SS. Template-driven word learning, an early word-learning strategy, appears to play a role in the lexical access of children with CIs whether matched on hearing age or not, with the SS-SS condition acting as a preferred word template. The longer RTs brought about by the highly accurate SS-SS condition in children with CIs possibly reflect more effortful listening. The lack of an RT difference between the children groups when matched on hearing age points to the oral-language exposure period as a key factor in developing auditory processing skills.
13. Gaskell MG, Cairney SA, Rodd JM. Contextual priming of word meanings is stabilized over sleep. Cognition 2019; 182:109-126. PMID: 30227332. DOI: 10.1016/j.cognition.2018.09.007.
Abstract
Evidence is growing for the involvement of consolidation processes in the learning and retention of language, largely based on instances of new linguistic components (e.g., new words). Here, we assessed whether consolidation effects extend to the semantic processing of highly familiar words. The experiments were based on the word-meaning priming paradigm in which a homophone is encountered in a context that biases interpretation towards the subordinate meaning. The homophone is subsequently used in a word-association test to determine whether the priming encounter facilitates the retrieval of the primed meaning. In Experiment 1 (N = 74), we tested the resilience of priming over periods of 2 and 12 h that were spent awake or asleep, and found that sleep periods were associated with stronger subsequent priming effects. In Experiment 2 (N = 55) we tested whether the sleep benefit could be explained in terms of a lack of retroactive interference by testing participants 24 h after priming. Participants who had the priming encounter in the evening showed stronger priming effects after 24 h than participants primed in the morning, suggesting that sleep makes priming resistant to interference during the following day awake. The results suggest that consolidation effects can be found even for highly familiar linguistic materials. We interpret these findings in terms of a contextual binding account in which all language perception provides a learning opportunity, with sleep and consolidation contributing to the updating of our expectations, ready for the next day.
14. Craig M, Ottaway G, Dewar M. Rest on it: Awake quiescence facilitates insight. Cortex 2018; 109:205-214. PMID: 30388441. DOI: 10.1016/j.cortex.2018.09.009.
Abstract
Many scientific discoveries have been explained by a sudden gaining of insight into an ongoing problem. Insight is characterised by a mental restructuring of acquired information, from which new explicit knowledge can be drawn, leading to qualitative changes in behaviour. Extended sleep facilitates the gaining of insight, possibly because it is conducive to the stabilisation and restructuring of new memory representations via consolidation. Research shows that a brief period of awake quiescence (quiet resting), too, can support consolidation: people remember more new memories if they quietly rest for several minutes after encoding than if they engage in a task involving ongoing sensory input. However, it remains unknown whether awake quiescence inspires insight. Using a number-based problem-solving task (the Number Reduction Task, 'NRT'), we reveal that, like sleep, awake quiescence facilitates the rapid gaining of insight: young adults were more than twice as likely to demonstrate new explicit knowledge of a hidden solution to the NRT if initial exposure to this task was followed by 10 min of awake quiescence rather than by an unrelated perceptual task. These findings indicate that, at least for the NRT, the development of insight is not restricted to sleep but can be achieved via a brief period of awake quiescence. Thus, contrary to conventional wisdom and theories, when faced with a novel problem we may not always need to 'sleep on it' to find a novel solution; simply 'resting on it' may be enough.
Affiliation(s)
- Michael Craig
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot Watt University, Edinburgh, United Kingdom.
- Georgina Ottaway
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot Watt University, Edinburgh, United Kingdom
- Michaela Dewar
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot Watt University, Edinburgh, United Kingdom
15. Alderete J, Tupper P. Phonological regularity, perceptual biases, and the role of phonotactics in speech error analysis. Wiley Interdisciplinary Reviews: Cognitive Science 2018; 9:e1466. PMID: 29847014. DOI: 10.1002/wcs.1466.
Abstract
Speech errors involving manipulations of sounds tend to be phonologically regular in the sense that they obey the phonotactic rules of well-formed words. We review the empirical evidence for phonological regularity in prior research, including both categorical assessments of words and regularity at the granular level involving specific segments and contexts. Since the reporting of regularity is affected by human perceptual biases, we also document this regularity in a new data set of 2,228 sublexical errors that was collected using methods that are demonstrably less prone to bias. These facts validate the claim that sound errors are overwhelmingly regular, but the new evidence suggests speech errors admit more phonologically ill-formed words than previously thought. Detailed facts of the phonological structure of errors, including this revised standard, are then related to model assumptions in contemporary theories of phonological encoding. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Computational Models of Language; Psychology > Language.
Affiliation(s)
- Paul Tupper
- Simon Fraser University, Burnaby, BC, Canada
16. The role of consolidation in learning context-dependent phonotactic patterns in speech and digital sequence production. Proc Natl Acad Sci U S A 2018; 115:3617-3622. PMID: 29555766. DOI: 10.1073/pnas.1721107115.
Abstract
Speakers implicitly learn novel phonotactic patterns by producing strings of syllables. The learning is revealed in their speech errors. First-order patterns, such as "/f/ must be a syllable onset," can be distinguished from contingent, or second-order, patterns, such as "/f/ must be an onset if the vowel is /a/, but a coda if the vowel is /o/." A meta-analysis of 19 experiments clearly demonstrated that first-order patterns affect speech errors to a very great extent within a single experimental session, but second-order vowel-contingent patterns only affect errors on the second day of testing, suggesting the need for a consolidation period. Two experiments tested an analogue to these studies involving sequences of button pushes, with fingers as "consonants" and thumbs as "vowels." The button-push errors revealed two of the key speech-error findings: first-order patterns are learned quickly, but second-order thumb-contingent patterns are only strongly revealed in the errors on the second day of testing. The influence of computational complexity on the implicit learning of phonotactic patterns in speech production may be a general feature of sequence production.
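The distinction between the two pattern types is easy to state as code. A minimal sketch (the syllable shapes and the particular contingency are invented for illustration, not the experimental stimuli): a first-order constraint mentions a single position, while a second-order constraint makes that position contingent on the vowel.

```python
# Toy CVC syllables: onset, vowel, coda, one character each (an assumption).
def violates_first_order(syllable):
    """First-order: /f/ must be an onset, so it may never appear as a coda."""
    return syllable.endswith("f")

def violates_second_order(syllable):
    """Second-order: /f/ is an onset if the vowel is /a/, a coda if /o/."""
    if "f" not in syllable:
        return False
    vowel = syllable[1]
    return (vowel == "a" and syllable.endswith("f")) or \
           (vowel == "o" and syllable.startswith("f"))

print(violates_first_order("gaf"))   # True: /f/ in coda position
print(violates_second_order("fak"))  # False: /f/ onset with /a/ is legal
print(violates_second_order("fok"))  # True: /f/ onset with /o/ is illegal
```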
17. Goldrick M. Encoding of distributional regularities independent of markedness: Evidence from unimpaired speakers. Cogn Neuropsychol 2018; 34:476-481. PMID: 29457555. DOI: 10.1080/02643294.2017.1421149.
Abstract
Romani, Galuzzi, Guariglia, and Goslin (Comparing phoneme frequency, age of acquisition and loss in aphasia: Implications for phonological universals. Cognitive Neuropsychology) used speech error data from individuals with acquired impairments to argue that independent from articulatory complexity, within-language distributional regularities influence the processing of sound structure in speech production. Converging evidence from unimpaired speakers is reviewed, focusing on speech errors in language production. Future research should examine how articulatory and frequency factors are integrated in language processing.
Affiliation(s)
- Matthew Goldrick
- Department of Linguistics, Northwestern University, Evanston, IL, USA
18.
Abstract
Even adults sometimes have difficulty choosing between single- and double-letter spellings, as in spinet versus spinnet. The present study examined the phonological and graphotactic factors that influence adults' use of single versus double medial consonants in the spelling of nonwords. We tested 111 adults from a community sample who varied widely in spelling ability. Better spellers were more affected than less good spellers by phonological context in that they were more likely to double consonants after short vowels and less likely to double consonants after long vowels. Although descriptions of the English writing system focus on the role of phonology in determining use of single versus double consonants, participants were also influenced by graphotactic context. There was an effect of preceding graphotactic context, such that spellers were less likely to use a double consonant when they spelled the preceding vowel with more than one letter than when they spelled it with one letter. There was also an effect of following graphotactic context, such that doubling rate varied with the letters that the participant used at the end of the nonword. These graphotactic influences did not differ significantly in strength across the range of spelling ability in our study. Discussion focuses on the role of statistical learning in the learning of spelling patterns, especially those patterns that are not explicitly taught.
19. Silva RR, Chrobot N, Newman E, Schwarz N, Topolinski S. Make It Short and Easy: Username Complexity Determines Trustworthiness Above and Beyond Objective Reputation. Front Psychol 2017; 8:2200. PMID: 29312062; PMCID: PMC5742175. DOI: 10.3389/fpsyg.2017.02200.
Abstract
Can the mere name of a seller determine their trustworthiness in the eyes of consumers? In 10 studies (total N = 608), we explored username complexity and the trustworthiness of eBay seller profiles. Name complexity was manipulated through variations in username pronounceability and length. These dimensions had strong, independent effects on trustworthiness, with sellers with easy-to-pronounce or short usernames being rated as more trustworthy than sellers with difficult-to-pronounce or long usernames, respectively. Both effects were found repeatedly, even when objective information about seller reputation was available. We hypothesized the effect of name complexity on trustworthiness to be based on the experience of high vs. low processing fluency, with little awareness of the underlying process. Supporting this, participants could not correct for the impact of username complexity when explicitly asked to do so. Three alternative explanations based on attributions of the variations in name complexity to seller origin (ingroup vs. outgroup), username generation method (seller personal choice vs. computer algorithm), and age of the eBay profiles (10 years vs. 1 year) were tested and ruled out. Finally, we show that manipulating the ease of reading product descriptions instead of the sellers' names also impacts the trust ascribed to the sellers.
Affiliation(s)
- Rita R Silva
- Social Cognition Center Cologne, University of Cologne, Cologne, Germany
- Nina Chrobot
- Department of Psychology, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Eryn Newman
- Mind and Society Center, University of Southern California, Los Angeles, CA, United States
- Norbert Schwarz
- Department of Psychology, University of Southern California, Los Angeles, CA, United States
- Sascha Topolinski
- Social Cognition Center Cologne, University of Cologne, Cologne, Germany
20. Finley S. Learning metathesis: Evidence for syllable structure constraints. Journal of Memory and Language 2017; 92:142-157. PMID: 28082764; PMCID: PMC5222580. DOI: 10.1016/j.jml.2016.06.005.
Abstract
One of the major questions in the cognitive science of language is whether the perceptual and phonological motivations for the rules and patterns that govern the sounds of language are a part of the psychological reality of grammatical representations. This question is particularly important in the study of phonological patterns (systematic constraints on the representation of sounds), because phonological patterns tend to be grounded in phonetic constraints. This paper focuses on phonological metathesis, which occurs when two adjacent sounds switch positions (e.g., cast pronounced as cats). While many cases of phonological metathesis appear to be motivated by constraints on syllable structure, it is possible that these metathesis patterns are merely artifacts of historical change and do not represent the linguistic knowledge of the speaker (Blevins & Garrett, 1998). Participants who were exposed to a metathesis pattern that can be explained in terms of structural or perceptual improvement were less likely to generalize to metathesis patterns that did not show the same improvements. These results support a substantively biased theory in which phonological patterns are encoded in terms of structurally motivated constraints.
Affiliation(s)
- Sara Finley
- Department of Psychology, Pacific Lutheran University, 12180 Park Ave S, Tacoma, WA 98447, United States
21. Montag JL, Matsuki K, Kim JY, MacDonald MC. Language Specific and Language General Motivations of Production Choices: A Multi-Clause and Multi-Language Investigation. Collabra: Psychology 2017. DOI: 10.1525/collabra.94.
Abstract
Cross-linguistic studies allow for analyses that would be impossible in a single language. To better understand the factors that underlie sentence production, we investigated production choices in main and relative clause production tasks in three languages: English, Japanese and Korean. The effects of both non-linguistic attributes (such as conceptual animacy) and language specific properties (such as word order) were investigated. Japanese and Korean are structurally similar to each other but different from English, which allowed for an investigation of the production consequences of non-linguistic attributes in different typological or word order contexts (when Japanese and Korean speakers make similar production choices that are unlike those of English speakers), as well as production choices that differ despite typological similarity (when Japanese and Korean speakers make different choices). Speakers of all three languages produced more passive utterances when describing animate entities, but the overall rate of passives varied by task and language. Further, the sets of items that were most likely to elicit passives varied by language, with Japanese and Korean speakers more likely to produce passives when patients were adversely affected by the depicted event. These results suggest a number of factors that contribute to language production choices across three languages, and how general cognitive constraints on sentence production may interact with the structure of a specific language.
Affiliation(s)
- Jessica L. Montag
- Department of Psychology, University of California, Riverside, Riverside, California, US
- Kazunaga Matsuki
- Department of Linguistics and Languages, McMaster University, Hamilton, ON, Canada
- Jae Yun Kim
- The Fuqua School of Business, Duke University, Durham, NC, US
22. White KS, Chambers KE, Miller Z, Jethava V. Listeners learn phonotactic patterns conditioned on suprasegmental cues. Q J Exp Psychol (Hove) 2016; 70:2560-2576. PMID: 27734753. DOI: 10.1080/17470218.2016.1247896.
Abstract
Language learners are sensitive to phonotactic patterns from an early age and can acquire both simple and second-order positional restrictions contingent on segment identity (e.g., /f/ is an onset with /æ/ but a coda with /ɪ/). The present study explored the learning of phonotactic patterns conditioned on a suprasegmental cue: lexical stress. Adults first heard non-words in which trochaic and iambic items had different consonant restrictions. In Experiment 1, participants trained with phonotactic patterns involving natural classes of consonants later falsely recognized novel items that were consistent with the training patterns (legal items), demonstrating that they had learned the stress-conditioned phonotactic patterns. However, this was only true for iambic items. In Experiment 2, participants completed a forced-choice test between novel legal and novel illegal items and were again successful only for the iambic items. Experiment 3 demonstrated learning for trochaic items when they were presented alone. Finally, in Experiment 4, in which the training phase was lengthened, participants successfully learned both sets of phonotactic patterns. These experiments provide evidence that learners consider more global phonological properties in the computation of phonotactic patterns, and that learners can acquire multiple sets of patterns simultaneously, even contradictory ones.
Affiliation(s)
- Katherine S White
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Kyle E Chambers
- Department of Psychology, Gustavus Adolphus College, St Peter, MN, USA
- Zachary Miller
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Vibhuti Jethava
- Department of Psychology, University of Waterloo, Waterloo, ON, Canada
23.
Abstract
Speakers sometimes encounter utterances that have anomalous linguistic features. Are such features registered during comprehension and transferred to speakers' production systems? In two experiments, we explored these questions. In a syntactic-priming paradigm, speakers heard prime sentences with novel or intransitive verbs as part of prepositional-dative or double-object structures (e.g., The chef munded the cup to the burglar or The doctor existed the pirate the balloon). Speakers then described target pictures eliciting the same structures, using the same or different novel or intransitive verbs. Speakers overall described targets with the same structures as the primes (abstract syntactic priming), but more so when the primes and targets had the same novel or intransitive verbs (a lexical boost), an effect that was only observed when the novel words served as the verbs in both the prime and target sentences. Such a lexical boost could only manifest if speakers formed associations between the verbs and structures in the primes during comprehension, and if these associations were then transferred to their production systems. We thus showed that anomalous utterance features are not ignored but persist (at least) in speakers' immediately subsequent production.
24. Gagliardi A, Feldman NH, Lidz J. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior. Cogn Sci 2016; 41:188-217. PMID: 27245747. DOI: 10.1111/cogs.12373.
Abstract
Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the hypothesis that children are classifying nouns optimally with respect to a distribution that does not match the surface distribution of statistical features in their input. We propose three ways in which children's apparent statistical insensitivity might arise, and find that all three provide ways to account for the difference between children's behavior and the optimal classifier. A fourth model combines two of these proposals and finds that children's insensitivity is best modeled as a bias to ignore certain features during classification, rather than an inability to encode those features during learning. These results provide insight into children's developing knowledge of noun classes and highlight the complex ways in which statistical information from the input interacts with children's learning processes.
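The modeling contrast can be sketched compactly. Below is a toy naive Bayes classifier (the feature set and probabilities are invented for illustration; the paper's models are richer): the "optimal" classifier uses all features of a novel noun, while the best-fitting biased variant ignores selected features at classification time even though they were encoded during learning.

```python
import math

# Invented P(feature value | class) for two noun classes and uniform priors.
LIKELIHOODS = {
    "class1": {"ends_in_a": 0.8, "animate": 0.3},
    "class2": {"ends_in_a": 0.2, "animate": 0.7},
}
PRIORS = {"class1": 0.5, "class2": 0.5}

def classify(noun_features, ignore=()):
    """Posterior over classes, optionally ignoring features (the bias)."""
    scores = {}
    for c, prior in PRIORS.items():
        logp = math.log(prior)
        for f, present in noun_features.items():
            if f in ignore:
                continue
            p = LIKELIHOODS[c][f]
            logp += math.log(p if present else 1.0 - p)
        scores[c] = logp
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

novel = {"ends_in_a": True, "animate": True}
print(classify(novel))                      # optimal: uses both cues
print(classify(novel, ignore={"animate"}))  # biased: ignores animacy
```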
Affiliation(s)
- Naomi H Feldman
- Department of Linguistics, University of Maryland; Institute for Advanced Computer Studies, University of Maryland
- Jeffrey Lidz
- Department of Linguistics, University of Maryland
25. Kleinschmidt DF, Jaeger TF. Re-examining selective adaptation: Fatiguing feature detectors, or distributional learning? Psychon Bull Rev 2016; 23:678-691. PMID: 26438255; PMCID: PMC4821823. DOI: 10.3758/s13423-015-0943-z.
Abstract
When a listener hears many good examples of a /b/ in a row, they are less likely to classify other sounds on, e.g., a /b/-to-/d/ continuum as /b/. This phenomenon is known as selective adaptation and is a well-studied property of speech perception. Traditionally, selective adaptation is seen as a mechanistic property of the speech perception system and attributed to fatigue in acoustic-phonetic feature detectors. However, recent developments in our understanding of non-linguistic sensory adaptation and higher-level adaptive plasticity in speech perception and language comprehension suggest that it is time to revisit the phenomenon of selective adaptation. We argue that selective adaptation is better thought of as a computational property of the speech perception system. Drawing on a common thread in recent work on both non-linguistic sensory adaptation and plasticity in language comprehension, we furthermore propose that selective adaptation can be seen as a consequence of distributional learning across multiple levels of representation. This proposal opens up new questions for research on selective adaptation itself, and also suggests that selective adaptation can be an important bridge between work on adaptation in low-level sensory systems and the complicated plasticity of the adult language comprehension system.
Affiliation(s)
- Dave F Kleinschmidt
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA.
- T Florian Jaeger
- Departments of Brain and Cognitive Sciences, Computer Science, and Linguistics, University of Rochester, Rochester, NY, USA
26. Warker JA, Dell GS. New phonotactic constraints learned implicitly by producing syllable strings generalize to the production of new syllables. J Exp Psychol Learn Mem Cogn 2015; 41:1902-1910. PMID: 26030628. DOI: 10.1037/xlm0000143.
Abstract
Novel phonotactic constraints can be acquired by hearing or speaking syllables that follow a novel constraint. When learned from hearing syllables, these newly learned constraints generalize to syllables that were not experienced during training. However, generalization of phonotactic learning to novel syllables has never been persuasively demonstrated in production. The typical production experiment examines phonotactic learning through speech errors. After participants repeat syllable sequences embedded with a novel phonotactic constraint, such as /f/ appearing only in onset position, their speech errors come to adhere to the novel constraint. For example, when participants mistakenly move an /f/ to another syllable, it overwhelmingly moves to an onset rather than a coda position. We assessed whether constraints learned and measured in this manner generalize to unexperienced syllables and, at the same time, whether the slips tend to create previously experienced syllables (a syllable priming effect). We found evidence of generalization but not of syllable priming in participants' speech errors. The effect of phonotactic learning was as powerfully expressed during the production of unexperienced as experienced syllables. A connectionist model simulated the experimental results using a single learning mechanism and successfully reproduced the constraint learning, generalization, and lack of priming.
Affiliation(s)
- Gary S Dell
- Beckman Institute, University of Illinois at Urbana-Champaign
27. Kleinschmidt DF, Jaeger TF. Robust speech perception: recognize the familiar, generalize to the similar, and adapt to the novel. Psychol Rev 2015; 122:148-203. PMID: 25844873; PMCID: PMC4744792. DOI: 10.1037/a0038695.
Abstract
Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker's /p/ might be physically indistinguishable from another talker's /b/ (cf. lack of invariance). We characterize the computational problem posed by such a subjectively nonstationary world and propose that the speech perception system overcomes this challenge by (a) recognizing previously encountered situations, (b) generalizing to other situations based on previous similar experience, and (c) adapting to novel situations. We formalize this proposal in the ideal adapter framework: (a) to (c) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on 2 critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires that listeners learn to represent the structured component of cross-situation variability in the speech signal. We discuss how these 2 aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension.
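The belief-updating component has a familiar conjugate form. A minimal sketch in the spirit of the ideal adapter (parameter values invented; the published model is considerably richer): the listener's Gaussian belief about a category's mean cue value is updated incrementally from each incoming token, with known variances assumed.

```python
def update_belief(mu, tau2, x, sigma2):
    """Conjugate normal-normal update of a category-mean belief.

    mu, tau2: prior mean and variance of the belief; x: observed cue value;
    sigma2: assumed within-category (noise) variance. Returns the posterior.
    """
    k = tau2 / (tau2 + sigma2)  # Kalman-style gain
    return mu + k * (x - mu), (1.0 - k) * tau2

mu, tau2 = 0.0, 25.0                  # prior: this talker's /b/ VOT near 0 ms
for vot in [15.0, 12.0, 18.0, 14.0]:  # the novel talker's /b/ tokens
    mu, tau2 = update_belief(mu, tau2, vot, sigma2=100.0)
print(round(mu, 1))  # the belief drifts toward the talker's distribution
```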
Affiliation(s)
- T Florian Jaeger
- Departments of Brain and Cognitive Sciences, Computer Science, and Linguistics, University of Rochester
28. Gaskell MG, Warker J, Lindsay S, Frost R, Guest J, Snowdon R, Stackhouse A. Sleep underpins the plasticity of language production. Psychol Sci 2014; 25:1457-1465. PMID: 24894583. DOI: 10.1177/0956797614535937.
Abstract
The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.
29. Romberg AR, Saffran JR. All together now: concurrent learning of multiple structures in an artificial language. Cogn Sci 2013; 37:1290-1320. PMID: 23772795; PMCID: PMC3769465. DOI: 10.1111/cogs.12050.
Abstract
Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult learning of an artificial language that contains dependencies between both adjacent and non-adjacent words. We found that learners rapidly acquired both types of regularities and that the strength of the adjacent statistics influenced learning of both adjacent and non-adjacent dependencies. Additionally, though accuracy was similar for both types of structure, participants' knowledge of the deterministic non-adjacent dependencies was more explicit than their knowledge of the probabilistic adjacent dependencies. The results are discussed in the context of current theories of statistical learning and language acquisition.
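The two statistics at issue can be computed in a few lines. A minimal sketch (the word stream below is invented for illustration): adjacent transitional probability P(next | current) versus non-adjacent transitional probability across one intervening element.

```python
from collections import Counter

def transitional_probs(stream, gap):
    """P(later | earlier) for element pairs separated by `gap` positions."""
    pairs = Counter(zip(stream, stream[gap:]))
    firsts = Counter(stream[:-gap])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# An A _ B frame: the middle element varies, the frame does not.
stream = ["A", "x", "B", "A", "y", "B", "A", "z", "B"]
adjacent = transitional_probs(stream, gap=1)     # probabilistic dependencies
nonadjacent = transitional_probs(stream, gap=2)  # deterministic: A -> B
print(adjacent[("A", "x")], nonadjacent[("A", "B")])  # 0.33..., 1.0
```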
Affiliation(s)
- Alexa R. Romberg
- Department of Psychology and Waisman Center, University of Wisconsin – Madison
| | - Jenny R. Saffran
- Department of Psychology and Waisman Center, University of Wisconsin – Madison
| |
Collapse
|
30
|
Abstract
We welcome the proposal to use forward models to understand predictive processes in language processing. However, Pickering & Garrod (P&G) miss the opportunity to provide a strong framework for future work. Forward models need to be pursued in the context of learning. This naturally leads to questions about what prediction error these models aim to minimize.
Collapse
|
31
|
Jaeger TF, Snider NE. Alignment as a consequence of expectation adaptation: syntactic priming is affected by the prime's prediction error given both prior and recent experience. Cognition 2013; 127:57-83. [PMID: 23354056 DOI: 10.1016/j.cognition.2012.10.013] [Citation(s) in RCA: 126] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2010] [Revised: 10/05/2012] [Accepted: 10/22/2012] [Indexed: 11/20/2022]
Abstract
Speakers show a remarkable tendency to align their productions with their interlocutors'. Focusing on sentence production, we investigate the cognitive systems underlying such alignment (syntactic priming). Our guiding hypothesis is that syntactic priming is a consequence of a language processing system that is organized to achieve efficient communication in an ever-changing (subjectively non-stationary) environment. We build on recent work suggesting that comprehenders adapt to the statistics of the current environment. If such adaptation is rational or near-rational, the extent to which speakers adapt their expectations for a syntactic structure after processing a prime sentence should be sensitive to the prediction error experienced while processing the prime. This prediction is shared by certain error-based implicit learning accounts, but not by most other accounts of syntactic priming. In three studies, we test this prediction against data from conversational speech, speech during picture description, and written production during sentence completion. All three studies find stronger syntactic priming for primes associated with a larger prediction error (primes with higher syntactic surprisal). We find that the relevant prediction error is sensitive to both prior and recent experience within the experiment. Together with other findings, this supports accounts that attribute syntactic priming to expectation adaptation.
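A hedged sketch of the expectation-adaptation idea: expectations for a structure are shifted in proportion to the prediction error (surprisal) incurred while processing the prime. The verb biases, learning rate, and error-based update rule below are assumptions chosen for illustration, not the paper's fitted model.

```python
import math

# Illustrative prior expectations for the prepositional-object (PO)
# structure, by verb; the numbers are invented.
p_po = {"give": 0.3, "send": 0.7}
LEARNING_RATE = 0.2  # free parameter of this sketch

def process_prime(verb, structure):
    """Error-based update: shift P(PO | verb) in proportion to the
    prediction error experienced on the prime."""
    observed = 1.0 if structure == "PO" else 0.0
    prob = p_po[verb] if structure == "PO" else 1 - p_po[verb]
    surprisal = -math.log2(prob)
    p_po[verb] += LEARNING_RATE * (observed - p_po[verb])
    return surprisal, p_po[verb]

# A PO prime is more surprising after a DO-biased verb ('give'), so it
# produces the larger shift in expectations -- i.e., stronger priming.
for verb in ["give", "send"]:
    s, p = process_prime(verb, "PO")
    print(f"{verb}: PO prime surprisal = {s:.2f} bits -> P(PO) now {p:.2f}")
```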
Collapse
|
32
|
Koo H, Callahan L. Tier-adjacency is not a necessary condition for learning phonotactic dependencies. LANGUAGE AND COGNITIVE PROCESSES 2012. [DOI: 10.1080/01690965.2011.603933] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
33
|
Thiessen ED, Pavlik PI. iMinerva: a mathematical model of distributional statistical learning. Cogn Sci 2012; 37:310-43. [PMID: 23126517 DOI: 10.1111/cogs.12011] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including learning phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, we demonstrate that the same computational framework can account for learning in all of these tasks. These results support two conclusions. The first is that much, and perhaps all, of distributional statistical learning can be explained by the same underlying set of processes. The second is that some aspects of language can be learned due to domain-general characteristics of memory.
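iMinerva builds on Hintzman's MINERVA 2 exemplar model; the toy sketch below shows only that core retrieval rule (similarity-cubed activation and an activation-weighted "echo"), with invented feature vectors, not the full iMinerva model.

```python
def similarity(probe, trace):
    """MINERVA 2 similarity: dot product over the features where either
    vector is nonzero."""
    n = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return sum(p * t for p, t in zip(probe, trace)) / n

def echo(probe, memory):
    """Each stored trace is activated by the cube of its similarity to
    the probe; the echo is the activation-weighted sum of all traces."""
    acts = [similarity(probe, t) ** 3 for t in memory]
    return [sum(a * t[i] for a, t in zip(acts, memory))
            for i in range(len(probe))]

# Toy memory of exemplars as +1/-1/0 feature vectors.
memory = [
    [1, 1, -1, 0],
    [1, 1, -1, 1],
    [-1, -1, 1, -1],
]
# A partial probe retrieves a blend dominated by the similar traces,
# filling in the unseen fourth feature -- one basis for generalization.
print(echo([1, 1, -1, 0], memory))
```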
Collapse
Affiliation(s)
- Erik D Thiessen
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
| | | |
Collapse
|
34
|
Kutta TJ, Kaschak MP. Changes in task-extrinsic context do not affect the persistence of long-term cumulative structural priming. Acta Psychol (Amst) 2012; 141:408-14. [PMID: 23103416 DOI: 10.1016/j.actpsy.2012.09.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2012] [Revised: 09/06/2012] [Accepted: 09/07/2012] [Indexed: 10/27/2022] Open
Abstract
We present two experiments exploring the role of extrinsic memory factors (i.e., factors that are extrinsic to the primary task that is being performed) and intrinsic memory factors (i.e., factors that are intrinsic to the primary task being completed) in the persistence of cumulative structural priming effects. Participants completed a two-phase experiment, where the first phase established a bias toward producing either the double object or prepositional object construction, and the second phase assessed the effects of this bias. Extrinsic memory factors were manipulated by having participants complete the two phases of the study in the same or different locations (physical context change) or while watching the same or different videos (video context change). Participants completed the second phase of the study 10 min after the first phase of the study in Experiment 1, and after a delay of 1 week in Experiment 2. Results suggest that the observed structural priming effects were not affected by manipulations of extrinsic memory factors. These data suggest that explicit memory does not play a large role in the long-term persistence of cumulative structural priming effects.
Collapse
|
35
|
Warker JA. Investigating the retention and time course of phonotactic constraint learning from production experience. J Exp Psychol Learn Mem Cogn 2012; 39:96-109. [PMID: 22686839 DOI: 10.1037/a0028648] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Adults can rapidly learn artificial phonotactic constraints such as "/f/ occurs only at the beginning of syllables" by producing syllables that follow those constraints. This implicit learning is then reflected in their speech errors. However, second-order constraints in which the placement of a phoneme depends on another characteristic of the syllable (e.g., if the vowel is /æ/, /f/ occurs at the beginning of syllables and /s/ occurs at the end of syllables, but if the vowel is /ɪ/, the reverse is true) require a longer learning period. Two experiments investigated the transience of second-order learning and whether consolidation plays a role in learning phonological dependencies, with speech errors used as a measure of learning. Experiment 1 tested the durability of learning and found that learning was still present in speech errors a week later. Experiment 2 looked at whether more time in the form of a consolidation period or more experience in the form of more trials was necessary for learning to be revealed in speech errors. Both consolidation and more trials led to learning; however, consolidation provided a more substantial benefit.
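The structure of a second-order constraint is easy to state in code; the generator below follows the logic of such designs, though the specific vowels, consonants, and syllables are invented here rather than taken from the experiments.

```python
import random

# vowel -> (onset-restricted consonant, coda-restricted consonant);
# the mapping reverses between the two vowels (a second-order constraint).
VOWEL_RULES = {"a": ("f", "s"), "i": ("s", "f")}
FILLERS = ["g", "k", "m", "n"]  # consonants free to occur in either position

def make_sequence(rng):
    """One three-syllable training sequence obeying the constraint."""
    vowel = rng.choice(sorted(VOWEL_RULES))
    onset_only, coda_only = VOWEL_RULES[vowel]
    cs = rng.sample(FILLERS, 4)
    syllables = [onset_only + vowel + cs[0],  # restricted consonant as onset
                 cs[1] + vowel + coda_only,   # restricted consonant as coda
                 cs[2] + vowel + cs[3]]       # unrestricted filler syllable
    rng.shuffle(syllables)
    return " ".join(syllables)

rng = random.Random(0)
for _ in range(4):
    print(make_sequence(rng))
```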
Collapse
Affiliation(s)
- Jill A Warker
- University of Scranton, Department of Psychology, 800 Linden Street, Scranton, PA 18510, USA.
| |
Collapse
|
36
|
Abstract
Cognitive rehabilitation research is increasingly exploring errorless learning interventions, which prioritise the avoidance of errors during treatment. The errorless learning approach was originally developed for patients with severe anterograde amnesia, who were deemed to be at particular risk for error learning. Errorless learning has since been investigated in other memory-impaired populations (e.g., Alzheimer's disease) and acquired aphasia. In typical errorless training, target information is presented to the participant for study or immediate reproduction, a method that prevents participants from attempting to retrieve target information from long-term memory (i.e., retrieval practice). However, assuring error elimination by preventing difficult (and error-permitting) retrieval practice is a potential major drawback of the errorless approach. This review begins with discussion of research in the psychology of learning and memory that demonstrates the importance of difficult (and potentially errorful) retrieval practice for robust learning and prolonged performance gains. We then review treatment research comparing errorless and errorful methods in amnesia and aphasia, where only the latter provides (difficult) retrieval practice opportunities. In each clinical domain we find the advantage of the errorless approach is limited and may be offset by the therapeutic potential of retrieval practice. Gaps in current knowledge are identified that preclude strong conclusions regarding a preference for errorless treatments over methods that prioritise difficult retrieval practice. We offer recommendations for future research aimed at a strong test of errorless learning treatments, which involves direct comparison with methods where retrieval practice effects are maximised for long-term gains.
Collapse
|
37
|
Chambers KE, Onishi KH, Fisher C. Representations for phonotactic learning in infancy. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2011; 7:287-308. [PMID: 22511851 PMCID: PMC3326355 DOI: 10.1080/15475441.2011.580447] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Infants rapidly learn novel phonotactic constraints from brief listening experience. Four experiments explored the nature of the representations underlying this learning. 16.5- and 10.5-month-old infants heard training syllables in which particular consonants were restricted to particular syllable positions (first-order constraints) or to syllable positions depending on the identity of the adjacent vowel (second-order constraints). Later, in a headturn listening-preference task, infants were presented with new syllables that either followed the experimental constraints or violated them. Infants at both ages learned first- and second-order constraints on consonant position (Experiments 1 and 2) but found second-order constraints more difficult to learn (Experiment 2). Infants also spontaneously generalized first-order constraints to syllables containing a new, transfer vowel; they did so whether the transfer vowel was similar to the familiarization vowels (Experiment 3) or dissimilar to them (Experiment 4). These findings suggest that infants recruit representations of individuated segments during phonological learning. Furthermore, like adults, they represent phonological sequences in a flexible manner that allows them to detect patterns at multiple levels of phonological analysis.
Collapse
|
38
|
Nozari N, Dell GS, Schwartz MF. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production. Cogn Psychol 2011; 63:1-33. [PMID: 21652015 DOI: 10.1016/j.cogpsych.2011.05.001] [Citation(s) in RCA: 118] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2011] [Accepted: 05/13/2011] [Indexed: 11/17/2022]
Abstract
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system.
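Response conflict is often operationalized as the co-activation of competing responses; the sketch below uses the summed pairwise product of activations (one common formulation, assumed here for illustration; the paper's own computation over the two-step model may differ).

```python
from itertools import combinations

def conflict(activations):
    """Summed pairwise product of competing response activations: near
    zero when one response dominates, high when competitors are close."""
    return sum(a * b for a, b in combinations(activations, 2))

# Toy lexical-layer activations from two production attempts.
clean_retrieval = [0.90, 0.05, 0.05]  # target dominates: little conflict
error_prone = [0.45, 0.40, 0.15]      # close competitor: high conflict

for label, acts in [("clean", clean_retrieval), ("error-prone", error_prone)]:
    print(f"{label}: conflict = {conflict(acts):.3f}")

# An internal monitor that thresholds this signal can flag likely errors
# without consulting the comprehension system.
```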
Collapse
Affiliation(s)
- Nazbanou Nozari
- Beckman Institute, University of Illinois at Urbana-Champaign, 405 N. Matthews Ave., Urbana, IL 61801, USA.
| | | | | |
Collapse
|
39
|
Abstract
The repetition and the predictability of a word in a conversation are two factors that are believed to affect whether it is emphasized: predictable, repeated words are less acoustically prominent than unpredictable, new words. However, because predictability and repetition are correlated, it is unclear whether speakers lengthen unpredictable words to facilitate comprehension or whether this lengthening is the result of difficulties in accessing a new (nonrepeated) lexical item. In this study, we investigated the relationship between acoustic prominence, repetition, and predictability in a description task. In Experiment 1, we found that repeated referents are produced with reduced prominence, even when these referents are unexpected. In Experiment 2, we found that predictability and repetition both have independent effects on duration and intensity. However, word duration was primarily determined by repetition, and intensity was primarily determined by predictability. The results are most consistent with an account in which multiple cognitive factors influence the acoustic prominence of a word.
Collapse
|
40
|
Humphreys KR, Menzies H, Lake JK. Repeated speech errors: evidence for learning. Cognition 2011; 117:151-65. [PMID: 20801433 DOI: 10.1016/j.cognition.2010.08.006] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2009] [Revised: 05/24/2010] [Accepted: 08/07/2010] [Indexed: 11/26/2022]
Abstract
Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as constraining the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience.
Collapse
Affiliation(s)
- Karin R Humphreys
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, Ontario, Canada.
| | | | | |
Collapse
|
41
|
Chambers KE, Onishi KH, Fisher C. A vowel is a vowel: generalizing newly learned phonotactic constraints to new contexts. J Exp Psychol Learn Mem Cogn 2010; 36:821-8. [PMID: 20438279 DOI: 10.1037/a0018991] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Adults can learn novel phonotactic constraints from brief listening experience. We investigated the representations underlying phonotactic learning by testing generalization to syllables containing new vowels. Adults heard consonant-vowel-consonant study syllables in which particular consonants were artificially restricted to the onset or coda position (e.g., /f/ is an onset, /s/ is a coda). Subjects were quicker to repeat novel constraint-following (legal) than constraint-violating (illegal) test syllables whether they contained a vowel used in the study syllables (training vowel) or a new (transfer) vowel. This effect emerged regardless of the acoustic similarity between training and transfer vowels. Listeners thus learned and generalized phonotactic constraints that can be characterized as simple first-order constraints on consonant position. Rapid generalization independent of vowel context provides evidence that vowels and consonants are represented independently by processes underlying phonotactic learning.
Collapse
Affiliation(s)
- Kyle E Chambers
- Department of Psychology, Gustavus Adolphus College, Saint Peter, MN 56082, USA.
| | | | | |
Collapse
|
42
|
Goldrick M, Folk JR, Rapp B. Mrs. Malaprop's Neighborhood: Using Word Errors to Reveal Neighborhood Structure. JOURNAL OF MEMORY AND LANGUAGE 2010; 62:113-134. [PMID: 20161591 PMCID: PMC2808630 DOI: 10.1016/j.jml.2009.11.008] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Many theories of language production and perception assume that in the normal course of processing a word, additional non-target words (lexical neighbors) become active. The properties of these neighbors can provide insight into the structure of representations and processing mechanisms in the language processing system. To infer the properties of neighbors, we examined the non-semantic errors produced in both spoken and written word production by four individuals who suffered neurological injury. Using converging evidence from multiple language tasks, we first demonstrate that the errors originate in disruption to the processes involved in the retrieval of word form representations from long-term memory. The targets and errors produced were then examined for their similarity along a number of dimensions. A novel statistical simulation procedure was developed to determine the significance of the observed similarities between targets and errors relative to multiple chance baselines. The results reveal that in addition to position-specific form overlap (the only consistent claim of traditional definitions of neighborhood structure) the dimensions of lexical frequency, grammatical category, target length and initial segment independently contribute to the activation of non-target words in both spoken and written production. Additional analyses confirm the relevance of these dimensions for word production showing that, in both written and spoken modalities, the retrieval of a target word is facilitated by increasing neighborhood density, as defined by the results of the target-error analyses.
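In spirit, such a procedure asks whether targets and errors are more similar than chance by comparing the observed similarity against random re-pairings of targets and errors; the onset-sharing statistic and word pairs below are invented for illustration and are not the paper's materials.

```python
import random

def shared_onset(target, error):
    """Toy similarity statistic: do the two words share an initial segment?"""
    return target[0] == error[0]

def permutation_test(targets, errors, stat, n_iter=10_000, seed=0):
    """Monte Carlo chance baseline: how often does a random re-pairing of
    targets and errors match or exceed the observed similarity?"""
    rng = random.Random(seed)
    observed = sum(stat(t, e) for t, e in zip(targets, errors))
    shuffled = errors[:]
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(shuffled)
        if sum(stat(t, e) for t, e in zip(targets, shuffled)) >= observed:
            hits += 1
    return observed, hits / n_iter  # one-tailed p-value

targets = ["cat", "dog", "pen", "cup", "map", "sun"]
errors = ["cap", "dot", "pin", "cut", "mat", "fun"]
obs, p = permutation_test(targets, errors, shared_onset)
print(f"{obs}/6 pairs share an onset; permutation p = {p:.4f}")
```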
Collapse
Affiliation(s)
- Matthew Goldrick
- Department of Cognitive Science, Johns Hopkins University
- Department of Linguistics, Northwestern University
| | | | - Brenda Rapp
- Department of Cognitive Science, Johns Hopkins University
| |
Collapse
|
43
|
Abstract
Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on the previous day engaged in lexical competition with similar-sounding existing words. Additionally, only novel words learned on the previous day exhibited faster repetition latencies relative to unfamiliar controls. This overnight consolidation effect was further examined using fMRI to compare neural responses to existing and novel words learned on different days prior to scanning (Experiment 2). This revealed an elevated response for novel compared with existing words in left superior temporal gyrus (STG), inferior frontal and premotor regions, and right cerebellum. Cortical activation was of equivalent magnitude for unfamiliar novel words and items learned on the day of scanning but significantly reduced for novel words learned on the previous day. In contrast, hippocampal responses were elevated for novel words that were entirely unfamiliar, and this elevated response correlated with postscanning behavioral measures of word learning. These findings are consistent with a dual-learning system account in which there is a division of labor between medial-temporal systems that are involved in initial acquisition and neocortical systems in which representations of novel spoken words are subject to overnight consolidation.
Collapse
|
44
|
Acheson DJ, MacDonald MC. Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychol Bull 2009; 135:50-68. [PMID: 19210053 PMCID: PMC3000524 DOI: 10.1037/a0014411] [Citation(s) in RCA: 154] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information.
Collapse
Affiliation(s)
- Daniel J Acheson
- Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
| | | |
Collapse
|
45
|
Warker JA, Dell GS, Whalen CA, Gereg S. Limits on learning phonotactic constraints from recent production experience. J Exp Psychol Learn Mem Cogn 2008; 34:1289-95. [PMID: 18763905 DOI: 10.1037/a0013033] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Adults can learn new artificial phonotactic constraints by producing syllables that exhibit the constraints. The experiments presented here tested the limits of phonotactic learning in production using speech errors as an implicit measure of learning. Experiment 1 tested a constraint in which the placement of a consonant as an onset or coda depended on the identity of a nonadjacent consonant. Participant speech errors reflected knowledge of the constraint but not until the 2nd day of testing. Experiment 2 tested a constraint in which consonant placement depended on an extralinguistic factor, the speech rate. Participants were not able to learn this constraint. Together, these experiments suggest that phonotactic-like constraints are acquired when mutually constraining elements reside within the phonological system.
Collapse
Affiliation(s)
- Jill A Warker
- Department of Psychology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
| | | | | | | |
Collapse
|
46
|
Finn AS, Hudson Kam CL. The curse of knowledge: first language knowledge impairs adult learners' use of novel statistics for word segmentation. Cognition 2008; 108:477-99. [PMID: 18533142 DOI: 10.1016/j.cognition.2008.04.002] [Citation(s) in RCA: 79] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2006] [Revised: 03/23/2008] [Accepted: 04/15/2008] [Indexed: 11/28/2022]
Abstract
We investigated whether adult learners' knowledge of phonotactic restrictions on word forms from their first language impacts their ability to use statistical information to segment words in a novel language. Adults were exposed to a speech stream where English phonotactics and phoneme co-occurrence information conflicted. A control where these did not conflict was also run. Participants chose between words defined by novel statistics and words that are phonotactically possible in English, but had much lower phoneme contingencies. Control participants selected words defined by statistics while experimental participants did not. This result held up with increases in exposure and when segmentation was aided by telling participants a word prior to exposure. It was not the case, however, that participants simply preferred English-sounding words: when the stimuli contained very short pauses, participants were able to learn the novel words despite the fact that they violated English phonotactics. Results suggest that prior linguistic knowledge can interfere with learners' abilities to segment words from running speech using purely statistical cues at initial exposure.
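Purely statistics-driven segmentation of the kind pitted against phonotactics here can be illustrated by placing word boundaries at local dips in syllable-to-syllable transitional probability; the stream and syllables below are invented, not the study's stimuli.

```python
from collections import Counter

def segment_at_tp_dips(syllables):
    """Compute syllable-to-syllable transitional probabilities (TPs) and
    insert a word boundary wherever the TP is a local minimum."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    tp = [pairs[(a, b)] / firsts[a] for a, b in zip(syllables, syllables[1:])]
    words, word = [], [syllables[0]]
    for i in range(1, len(syllables)):
        left = tp[i - 2] if i >= 2 else 1.0
        right = tp[i] if i < len(tp) else 1.0
        if tp[i - 1] < left and tp[i - 1] < right:  # TP dip: likely boundary
            words.append("".join(word))
            word = []
        word.append(syllables[i])
    words.append("".join(word))
    return set(words)

# Three nonce words concatenated in varying order: TPs are 1.0 inside a
# word and roughly 0.5 across word boundaries.
stream = "pa bi ku ti bu do go la tu pa bi ku go la tu ti bu do".split() * 10
print(segment_at_tp_dips(stream))  # {'pabiku', 'tibudo', 'golatu'}
```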
Collapse
Affiliation(s)
- Amy S Finn
- Department of Psychology, University of California Berkeley, Berkeley, CA 94720-1650, USA.
| | | |
Collapse
|