1. Kutlu E, Klein-Packard J, Jeppsen C, Tomblin JB, McMurray B. The development of real-time spoken and written word recognition derives from changes in ability, not maturation. Cognition 2024; 251:105899. PMID: 39059118; PMCID: PMC11470444; DOI: 10.1016/j.cognition.2024.105899.
Abstract
In typical adults, recognizing both spoken and written words is thought to be served by a process of competition between candidates in the lexicon. In recent years, work has used eye-tracking in the visual world paradigm to characterize this competition process over development, showing that both spoken and written word recognition continue to develop through adolescence (Rigler et al., 2015). It is still unclear what drives these changes in real-time word recognition over the school years, as this period brings dramatic changes in language, the onset of reading instruction, and gains in domain-general function. This study began to address these issues by asking whether changes in real-time word recognition derive from changes in overall language and reading ability or reflect more general age-related development. This cross-sectional study examined 278 school-age children (Grades 1-3) using the Visual World Paradigm (VWP) to assess both spoken and written word recognition, along with multiple measures of language, reading, and phonology. A structural equation model applied to these ability measures yielded three factors representing language, reading, and phonology. Multiple regression analyses were then used to relate these three factors to real-time spoken and written word recognition, as well as to a non-linguistic variant of the VWP intended to capture decision speed, eye-movement factors, and other non-language/reading differences. For both spoken and written word recognition, the speed of activating target words was more closely tied to the relevant ability (e.g., reading for written word recognition) than to age. We also examined competition resolution (how fully competitors were suppressed late in processing). Here, spoken word recognition showed only small developmental effects, which were related solely to phonological processing, suggesting links to developmental language disorder. In written word recognition, however, competition resolution showed large developmental effects that were strongly linked to reading. This suggests that the dimensionality of real-time lexical processing may differ across domains. Importantly, neither spoken nor written word recognition was fully explained by the non-linguistic skills captured by the non-linguistic VWP, and performance on the non-linguistic VWP was itself linked to differences in language and reading. These findings suggest that spoken and written word recognition continue to develop through the school years, driven largely by growth in the relevant abilities rather than by maturation alone.
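The regression logic in this abstract (comparing the unique contributions of ability and age to a real-time word-recognition index) can be sketched with synthetic data; the sample size matches the study, but the data and coefficients below are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 278  # sample size matching the study; the data below are synthetic

# Synthetic predictors: an ability factor and age, correlated as in development
ability = rng.normal(size=n)
age = 0.5 * ability + rng.normal(scale=0.8, size=n)

# Synthetic outcome: VWP target-activation speed driven mostly by ability
speed = 0.7 * ability + 0.1 * age + rng.normal(scale=0.5, size=n)

# Multiple regression via least squares: speed ~ intercept + ability + age
X = np.column_stack([np.ones(n), ability, age])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
intercept, b_ability, b_age = beta
```

With both predictors in the model, the ability coefficient dominates the age coefficient, which is the pattern of unique contributions the abstract describes.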
Affiliation(s)
- Ethan Kutlu
- Department of Linguistics, University of Iowa, Iowa City, 52242, USA; Department of Psychological & Brain Sciences, University of Iowa, Iowa City, 52242, USA.
- Jamie Klein-Packard
- Department of Psychological & Brain Sciences, University of Iowa, Iowa City, 52242, USA.
- Charlotte Jeppsen
- Department of Psychological & Brain Sciences, University of Iowa, Iowa City, 52242, USA.
- J Bruce Tomblin
- Department of Communication Sciences & Disorders, University of Iowa, Iowa City, 52242, USA.
- Bob McMurray
- Department of Psychological & Brain Sciences, University of Iowa, Iowa City, 52242, USA; Department of Communication Sciences & Disorders, University of Iowa, Iowa City, 52242, USA; Department of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, 52242, USA.
2. McMurray B, Smith FX, Huffman M, Rooff K, Muegge JB, Jeppsen C, Kutlu E, Colby S. Underlying dimensions of real-time word recognition in cochlear implant users. Nat Commun 2024; 15:7382. PMID: 39209837; PMCID: PMC11362525; DOI: 10.1038/s41467-024-51514-3. Open access.
Abstract
Word recognition is a gateway to language, linking sound to meaning. Prior work has characterized its cognitive mechanisms as a form of competition between similar-sounding words. However, it has not identified the dimensions along which this competition varies across people. We sought to identify these dimensions in a population of cochlear implant users with heterogeneous backgrounds and audiological profiles, and in a lifespan sample of people without hearing loss. Our study characterizes the process of lexical competition using the Visual World Paradigm. A principal component analysis reveals that people's ability to resolve lexical competition varies along three dimensions that mirror prior small-scale studies. These dimensions capture the degree to which lexical access is delayed ("Wait-and-See"), the degree to which competition fully resolves ("Sustained-Activation"), and the overall rate of activation. Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and in noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes. These results suggest that the mechanisms of word recognition vary along a few underlying dimensions that help explain variable performance among listeners encountering auditory challenge.
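The principal component analysis described above can be illustrated with a minimal sketch: a participants-by-measures matrix of fixation-curve parameters is centered and decomposed by SVD, yielding per-listener component scores analogous to positions on dimensions such as Wait-and-See. The data and the two-dimensional latent structure below are assumptions for illustration, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_measures = 100, 6  # synthetic VWP fixation-curve parameters

# Two latent dimensions (e.g., timing vs. competition resolution) generate the measures
latent = rng.normal(size=(n_people, 2))
loadings = rng.normal(size=(2, n_measures))
data = latent @ loadings + 0.1 * rng.normal(size=(n_people, n_measures))

# PCA via SVD of the column-centered matrix
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)   # proportion of variance per component
scores = centered @ Vt.T              # per-person scores on each component
```

Because the synthetic data have two latent dimensions, the first two components capture nearly all the variance; in the study, such scores are what get related to demographic predictors and outcomes.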
Affiliation(s)
- Bob McMurray
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA.
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA.
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA.
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA.
- Francis X Smith
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Communication Sciences & Disorders, University of Iowa, Iowa City, IA, USA
- Marissa Huffman
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- Kristin Rooff
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
- John B Muegge
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Charlotte Jeppsen
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Ethan Kutlu
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Linguistics, University of Iowa, Iowa City, IA, USA
- Sarah Colby
- Dept. of Psychological & Brain Sciences, University of Iowa, Iowa City, IA, USA
- Dept. of Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
3. Levari T, Snedeker J. Understanding words in context: A naturalistic EEG study of children's lexical processing. J Mem Lang 2024; 137:104512. PMID: 38855737; PMCID: PMC11160963; DOI: 10.1016/j.jml.2024.104512.
Abstract
When listening to speech, adults rely on context to anticipate upcoming words. Evidence for this comes from studies demonstrating that the N400, an event-related potential (ERP) that indexes ease of lexical-semantic processing, is influenced by the predictability of a word in context. We know far less about the role of context in children's speech comprehension. The present study explored lexical processing in adults and 5-10-year-old children as they listened to a story. ERPs time-locked to the onset of every word were recorded. Each content word was coded for frequency, semantic association, and predictability. In both children and adults, N400s reflect word predictability, even when controlling for frequency and semantic association. These findings suggest that both adults and children use top-down constraints from context to anticipate upcoming words when listening to stories.
Affiliation(s)
- Tatyana Levari
- Department of Psychology, Harvard University, United States
- Jesse Snedeker
- Department of Psychology, Harvard University, United States
4. Guerra E, Coloma CJ, Helo A. Lexical-semantic processing in preschoolers with Developmental Language Disorder: an eye tracking study. Front Psychol 2024; 15:1338517. PMID: 38807960; PMCID: PMC11131166; DOI: 10.3389/fpsyg.2024.1338517. Open access.
Abstract
This study examined lexical-semantic processing in children with Developmental Language Disorder (DLD) during real-time, visually situated comprehension of spoken words. Existing evidence suggests that children with DLD may experience challenges in lexical access and retrieval, as well as greater lexical competition, compared to their peers with Typical Development (TD). However, the specific nature of these difficulties remains unclear. Using eye-tracking methodology, the study investigated the real-time comprehension of semantic relationships in children with DLD and their age-matched peers. The results revealed that, for relatively frequent nouns, both groups demonstrated similar comprehension of semantic relationships. Both groups favored the semantic competitor when it appeared with an unrelated visual referent. In turn, when the semantic competitor appeared with the visual referent of the spoken word, both groups disregarded the competitor. This finding shows that, although children with DLD usually present a relatively impoverished vocabulary, frequent nouns may not pose greater difficulties for them. While the temporal course of preference for the competitor or the referent was similar between the two groups, numerical, though non-significant, differences in the extent of the effect clusters were observed. In summary, this research demonstrates that monolingual preschoolers with DLD exhibit lexical access to frequent words similar to that of their peers with TD. Future studies should investigate the performance of children with DLD on less frequent words to provide a comprehensive understanding of their lexical-semantic abilities.
Affiliation(s)
- Ernesto Guerra
- Centro de Investigación Avanzada en Educación, Instituto de Educación, Universidad de Chile, Santiago, Chile
- Carmen Julia Coloma
- Centro de Investigación Avanzada en Educación, Instituto de Educación, Universidad de Chile, Santiago, Chile
- Departamento de Fonoaudiología, Universidad de Chile, Santiago, Chile
- Andrea Helo
- Centro de Investigación Avanzada en Educación, Instituto de Educación, Universidad de Chile, Santiago, Chile
- Departamento de Fonoaudiología, Universidad de Chile, Santiago, Chile
- Departamento de Neurociencias, Universidad de Chile, Santiago, Chile
5. Jeppsen C, Baxelbaum K, Tomblin B, Klein K, McMurray B. The development of lexical processing: Real-time phonological competition and semantic activation in school age children. Q J Exp Psychol (Hove) 2024:17470218241244799. PMID: 38508999; DOI: 10.1177/17470218241244799.
Abstract
Prior research suggests that the development of speech perception and word recognition stabilises in early childhood. However, recent work suggests that development of these processes continues throughout adolescence. This study aimed to investigate whether these developmental changes are based solely within the lexical system or are due to domain-general changes, and to extend this investigation to lexical-semantic processing. We used two Visual World Paradigm tasks: one to examine phonological and semantic processing, and one to capture non-linguistic, domain-general skills. We tested 43 seven- to nine-year-olds, 42 ten- to thirteen-year-olds, and 30 sixteen- to seventeen-year-olds. Older children were quicker to fixate the target word and exhibited earlier onset and offset of fixations to both semantic and phonological competitors. Visual/cognitive skills explained significant, but not all, variance in the development of these effects. Developmental changes in semantic activation were largely attributable to changes in upstream phonological processing. These results suggest that the concurrent development of linguistic processes and broader visual/cognitive skills leads to developmental changes in real-time phonological competition, while semantic activation is more stable across these ages.
Affiliation(s)
- Charlotte Jeppsen
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, USA
- Keith Baxelbaum
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, USA
- Bruce Tomblin
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA, USA
- Kelsey Klein
- Department of Audiology and Speech Pathology, The University of Tennessee Health Science Center, Memphis, TN, USA
- Bob McMurray
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, USA
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA, USA
6. Hintz F, Shkaravska O, Dijkhuis M, van 't Hoff V, Huijsmans M, van Dongen RCA, Voeteé LAB, Trilsbeek P, McQueen JM, Meyer AS. IDLaS-NL - A platform for running customized studies on individual differences in Dutch language skills via the Internet. Behav Res Methods 2024; 56:2422-2436. PMID: 37749421; PMCID: PMC10991024; DOI: 10.3758/s13428-023-02156-8.
Abstract
We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the Internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions, and to determine their order. Moreover, for standardized administration, the platform provides an application (an emulated browser) in which the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV output via e-mail. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators, and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
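Since the platform delivers results as CSV files, a sketch of downstream handling may be useful: the snippet below collects per-participant scores across tests into one record apiece. The column names are hypothetical and may differ from IDLaS-NL's actual export schema.

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV export; real IDLaS-NL column names may differ
raw = """participant,test,score
p01,vocabulary,34
p01,nonword_repetition,21
p02,vocabulary,29
p02,nonword_repetition,25
"""

# Collect each participant's scores across tests into a single record
records = defaultdict(dict)
for row in csv.DictReader(io.StringIO(raw)):
    records[row["participant"]][row["test"]] = int(row["score"])
```

In practice the same loop would read the e-mailed file with `open(path)` instead of `io.StringIO`, leaving one dictionary per participant ready for analysis.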
Affiliation(s)
- Florian Hintz
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands.
- Deutscher Sprachatlas, Philipps University, Marburg, Germany.
- Olha Shkaravska
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Marjolijn Dijkhuis
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Vera van 't Hoff
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Milou Huijsmans
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Robert C A van Dongen
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Levi A B Voeteé
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Paul Trilsbeek
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- James M McQueen
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Radboud University, Nijmegen, The Netherlands
- Antje S Meyer
- Max Planck Institute for Psycholinguistics, P.O. Box 310, Nijmegen, 6500, AH, The Netherlands
- Radboud University, Nijmegen, The Netherlands
7. Petersen IT, Apfelbaum KS, McMurray B. Adapting Open Science and Pre-registration to Longitudinal Research. Infant Child Dev 2024; 33:e2315. PMID: 38425545; PMCID: PMC10904029; DOI: 10.1002/icd.2315.
Abstract
Open science practices, such as pre-registration and data sharing, increase transparency and may improve the replicability of developmental science. However, developmental science has lagged behind other fields in implementing open science practices. This lag may arise from unique challenges and considerations of longitudinal research. In this paper, preliminary guidelines are provided for adapting open science practices to longitudinal research to facilitate researchers' use of these practices. The guidelines propose a serial and modular approach to registration: an initial pre-registration of the methods and focal hypotheses of the longitudinal study, followed by pre- or co-registered questions, hypotheses, and analysis plans associated with specific papers. Researchers are encouraged to share their research materials and relevant data with associated papers, and to report sufficient information for replicability. In addition, requirements regarding the timing of data sharing deserve careful consideration, to avoid disincentivizing longitudinal research.
Affiliation(s)
- Isaac T Petersen
- Department of Psychological and Brain Sciences, University of Iowa
- Bob McMurray
- Department of Psychological and Brain Sciences, Department of Communication Sciences and Disorders and Department of Linguistics, University of Iowa
8. Colby SE, McMurray B. Efficiency of spoken word recognition slows across the adult lifespan. Cognition 2023; 240:105588. PMID: 37586157; PMCID: PMC10530619; DOI: 10.1016/j.cognition.2023.105588.
Abstract
Spoken word recognition is a critical hub during language processing, linking hearing and perception to meaning and syntax. Words must be recognized quickly and efficiently as speech unfolds to be successfully integrated into conversation. This makes word recognition a computationally challenging process even for young, normal-hearing adults. Older adults often experience declines in hearing and cognition, which could be linked by age-related declines in the cognitive processes specific to word recognition. However, it is unclear whether changes in word recognition across the lifespan can be accounted for by hearing or domain-general cognition. Participants (N = 107) responded to spoken words in a Visual World Paradigm task while their eyes were tracked to assess the real-time dynamics of word recognition. We examined several indices of word recognition from early adolescence through older adulthood (ages 11-78). The timing and proportion of eye fixations to target and competitor images reveal that spoken word recognition became more efficient through age 25 and began to slow in middle age, accompanied by declines in the ability to resolve competition (e.g., suppressing sandwich to recognize sandal). There was a unique effect of age even after accounting for differences in inhibitory control, processing speed, and hearing thresholds. This suggests a limited age range in which listeners are peak performers.
Affiliation(s)
- Sarah E Colby
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Psychological and Brain Sciences Building, Iowa City, IA, 52242, USA; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, IA, 52242, USA; Department of Linguistics, University of Iowa, Phillips Hall, Iowa City, IA 52242, USA
9. Baron LS, Gul A, Arbel Y. With or Without Feedback? How the Presence of Feedback Affects Processing in Children with Developmental Language Disorder. Brain Sci 2023; 13:1263. PMID: 37759863; PMCID: PMC10526478; DOI: 10.3390/brainsci13091263. Open access.
Abstract
Language acquisition depends on the ability to process and learn probabilistic information, often through the integration of performance feedback. Children with developmental language disorder (DLD) have demonstrated weaknesses in both probabilistic learning and feedback processing, but the individual effects of each skill are poorly understood in this population. This study examined school-aged children with DLD (n = 29) and age- and gender-matched children with typical development (TD; n = 44) on a visual probabilistic classification learning task presented with and without feedback. In the feedback-based version of the task, children received performance feedback on a trial-by-trial basis during the training phase of the task. In the feedback-free version, children responded after seeing the correct choice marked with a green border and were not presented with feedback. Children with TD achieved higher accuracy than children with DLD following feedback-based training, while the two groups achieved similar levels of accuracy following feedback-free training. Analyses of event-related potentials (ERPs) provided insight into stimulus encoding processes. The feedback-free task was dominated by a frontal slow wave (FSW) and a late parietal component (LPC) which were not different between the two groups. The feedback-based task was dominated by a parietal slow wave (PSW) and an LPC, both of which were found to be larger in the TD than in the DLD group. In combination, results suggest that engagement with feedback boosts learning in children with TD, but not in children with DLD. When the need to process feedback is eliminated, children with DLD demonstrate behavioral and neurophysiological responses similar to their peers with TD.
Affiliation(s)
- Lauren S. Baron
- MGH Institute of Health Professions, Boston, MA 02129, USA (shared with A. Gul and Y. Arbel)
10. Mahr TJ, Hustad KC. Lexical Predictors of Intelligibility in Young Children's Speech. J Speech Lang Hear Res 2023; 66:3013-3025. PMID: 36626389; PMCID: PMC10555465; DOI: 10.1044/2022_jslhr-22-00294.
Abstract
PURPOSE: Speech perception is a probabilistic process, integrating bottom-up and top-down sources of information, and the frequency and phonological neighborhood of a word can predict how well it is perceived. In addition to asking how intelligible speakers are, it is important to ask how intelligible individual words are. We examined whether lexical features of words influenced intelligibility in young children. In particular, we applied the neighborhood activation model, which posits that a word's frequency and the overall frequency of a word's phonological competitors jointly affect the intelligibility of the word.
METHOD: We measured the intelligibility of 165 children between 30 and 47 months of age on 38 different single words. We performed an item response analysis using generalized mixed-effects logistic regression, adding word-level characteristics (target frequency, neighborhood competition, motor complexity, and phonotactic probability) as predictors of intelligibility.
RESULTS: There was considerable variation among the words and the children, but between-word variability was larger in magnitude than between-child variability. There was a clear positive effect of target word frequency and a negative effect of neighborhood competition. We did not find a clear negative effect of motor complexity, and phonotactic probability had no effect on intelligibility.
CONCLUSION: Word frequency and neighborhood competition both affected intelligibility in young children's speech, so listener expectations are an important factor in the selection of items for children's intelligibility assessment.
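The item-level result (a positive frequency effect and a negative neighborhood-competition effect) can be sketched with a plain logistic regression fit by gradient ascent. The study itself used a generalized mixed-effects model with random effects for words and children; the simplified fixed-effects model, data, and coefficients below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # synthetic word-by-listener trials

# Standardized word-level predictors
freq = rng.normal(size=n)         # target word frequency
competition = rng.normal(size=n)  # phonological neighborhood competition

# Assumed generative model: frequency helps, competition hurts intelligibility
logit = 0.5 + 0.8 * freq - 0.6 * competition
intelligible = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit logistic regression by gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), freq, competition])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted P(intelligible)
    w += 0.1 * X.T @ (intelligible - p) / n  # average gradient step
```

The recovered weights show the qualitative pattern the abstract reports: a positive frequency coefficient and a negative competition coefficient.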
Affiliation(s)
- Katherine C. Hustad
- Waisman Center, University of Wisconsin–Madison
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
11. McMurray B, Baxelbaum KS, Colby S, Tomblin JB. Understanding language processing in variable populations on their own terms: Towards a functionalist psycholinguistics of individual differences, development, and disorders. Appl Psycholinguist 2023; 44:565-592. PMID: 39072293; PMCID: PMC11280349; DOI: 10.1017/s0142716423000255.
Abstract
Classic psycholinguistics seeks universal language mechanisms for all people, emphasizing the "modal" listener: hearing, neurotypical, monolingual, young adults. Applied psycholinguistics then characterizes differences in terms of their deviation from this modal listener. This mirrors naturalist philosophies of health, which presume a normal function, with illness as a deviation. In contrast, normative positions argue that illness is partially culturally derived: it occurs when a person cannot meet socio-culturally defined goals, separating differences in biology (disease) from socio-cultural function (illness). We synthesize this with mechanistic functionalist views in which language emerges from diverse lower-level mechanisms with no one-to-one mapping to function (termed the functional mechanistic normative approach). This challenges primarily psychometric approaches, which are culturally defined, and suggests that a process-based approach may yield more insight. We illustrate this with work on word recognition across multiple domains: cochlear implant users, children, language disorders, L2 learners, and aging. This work investigates each group's solutions to the problem of word recognition as interesting in their own right. Variation in process is value-neutral, and psychometric measures complement this, reflecting fit with cultural expectations (disease vs. illness). By examining variation in processing across people with a variety of skills and goals, we arrive at deeper insight into fundamental principles.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, Dept. of Communication Sciences and Disorders, Dept. of Linguistics and Dept. of Otolaryngology, University of Iowa
- Sarah Colby
- Dept. of Psychological and Brain Sciences, Dept. of Otolaryngology, University of Iowa
- J Bruce Tomblin
- Dept. of Communication Sciences and Disorders, University of Iowa
12. Harmon Z, Barak L, Shafto P, Edwards J, Feldman NH. The competition-compensation account of developmental language disorder. Dev Sci 2023; 26:e13364. PMID: 36546681; DOI: 10.1111/desc.13364.
Abstract
Children with developmental language disorder (DLD) regularly use the bare form of verbs (e.g., dance) instead of inflected forms (e.g., danced). We propose an account of this behavior in which the processing difficulties of children with DLD disproportionately affect the processing of novel inflected verbs in their input. Limited experience with inflection in novel contexts leads the inflection to face stronger competition from alternatives. Competition is resolved through a compensatory behavior that involves producing a more accessible alternative: in English, the bare form. We formalize this hypothesis within a probabilistic model that trades off context-dependent versus context-independent processing. Results show an over-reliance on preceding stem contexts when retrieving the inflection in a model that has difficulty with processing novel inflected forms. We further show that, following the introduction of a bias to store and retrieve forms with preceding contexts, generalization in the typically developing (TD) models remains more or less stable, while the same bias in the DLD models exaggerates difficulties with generalization. Together, the results suggest that inconsistent use of inflectional morphemes by children with DLD could stem from inferences they make on the basis of data containing fewer novel inflected forms. Our account extends these findings to suggest that problems with detecting a form in novel contexts, combined with a bias to rely on familiar contexts when retrieving a form, could explain sequential planning difficulties in children with DLD.
RESEARCH HIGHLIGHTS:
- Generalization difficulties with inflectional morphemes in children with Developmental Language Disorder arise from these children's limited experience with novel inflected forms.
- Limited experience with a form in novel contexts could lead to a storage bias where retrieving a form often requires relying on familiar preceding stems.
- While generalization in typically developing models remains stable across a range of model parameters, certain parameter values in the impaired models exaggerate difficulties with generalization.
- Children with DLD compensate for these retrieval difficulties through accessibility-driven language production: they produce the most accessible form among the alternatives.
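The trade-off the model formalizes can be sketched as a simple mixture: the probability of retrieving the inflected form blends a stem-specific (context-dependent) estimate with a context-independent one, and over-weighting the stem-specific estimate for a never-inflected novel stem pushes retrieval toward the bare form. The counts and the weight `lam` below are illustrative assumptions, not the paper's actual model.

```python
def p_inflected(stem_counts, global_counts, lam):
    """Blend context-dependent and context-independent estimates.

    stem_counts/global_counts: {'inflected': n, 'bare': n} observation counts;
    lam: weight on the stem-specific (context-dependent) estimate.
    """
    def prop(counts):
        total = counts["inflected"] + counts["bare"]
        return counts["inflected"] / total if total else 0.0
    return lam * prop(stem_counts) + (1 - lam) * prop(global_counts)

# A novel stem never observed inflected; inflection is common overall
novel_stem = {"inflected": 0, "bare": 2}
overall = {"inflected": 70, "bare": 30}

# TD-like retrieval: relies less on the specific stem context
td = p_inflected(novel_stem, overall, lam=0.3)
# DLD-like retrieval: over-relies on familiar stem contexts
dld = p_inflected(novel_stem, overall, lam=0.9)
```

With these numbers the TD-like model still produces the inflection roughly half the time, while the DLD-like model almost always falls back to the more accessible bare form, mirroring the compensatory behavior described above.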
Affiliation(s)
- Zara Harmon
- University of Maryland Institute for Advanced Computer Studies (UMIACS), College Park, Maryland, USA
- Department of Linguistics, University of Maryland, College Park, Maryland, USA
- Libby Barak
- Department of Mathematics and Computer Science, Rutgers University, New Brunswick, New Jersey, USA
- Patrick Shafto
- Department of Mathematics and Computer Science, Rutgers University, New Brunswick, New Jersey, USA
- School of Mathematics, Institute for Advanced Study, Princeton, New Jersey, USA
- Jan Edwards
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Naomi H Feldman
- University of Maryland Institute for Advanced Computer Studies (UMIACS), College Park, Maryland, USA
- Department of Linguistics, University of Maryland, College Park, Maryland, USA

13
Klein KE, Walker EA, McMurray B. Delayed Lexical Access and Cascading Effects on Spreading Semantic Activation During Spoken Word Recognition in Children With Hearing Aids and Cochlear Implants: Evidence From Eye-Tracking. Ear Hear 2023; 44:338-357. [PMID: 36253909 PMCID: PMC9957808 DOI: 10.1097/aud.0000000000001286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
Abstract
OBJECTIVE The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and spreading semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access. DESIGN Participants were children ages 9 to 12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical access and spreading semantic activation. RESULTS Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort competitor, and increased fixations to the rhyme competitor, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, although this delay was attributable to their delay in activating words in general, not to a distinct semantic source. CONCLUSIONS Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used. 
Delayed lexical access directly led to delays in spreading semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.
Affiliation(s)
- Kelsey E Klein
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Elizabeth A Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Bob McMurray
- Department of Psychological and Brain Sciences, Department of Communication Sciences and Disorders, and Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA

14
McMurray B. I'm not sure that curve means what you think it means: Toward a [more] realistic understanding of the role of eye-movement generation in the Visual World Paradigm. Psychon Bull Rev 2023; 30:102-146. [PMID: 35962241 PMCID: PMC10964151 DOI: 10.3758/s13423-022-02143-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/29/2022] [Indexed: 11/08/2022]
Abstract
The Visual World Paradigm (VWP) is a powerful experimental paradigm for language research. Listeners respond to speech in a "visual world" containing potential referents of the speech. Fixations to these referents provide insight into the preliminary states of language processing as decisions unfold. The VWP has become the dominant paradigm in psycholinguistics and has been extended to every level of language, development, and disorders. Part of its impact lies in the impressive data visualizations, which reveal the millisecond-by-millisecond time course of processing, and advances have been made in developing new analyses that precisely characterize this time course. All theoretical and statistical approaches make the tacit assumption that the time course of fixations is closely related to the underlying activation in the system. However, given the serial nature of fixations and their long refractory period, it is unclear how closely the observed dynamics of the fixation curves are actually coupled to the underlying dynamics of activation. I investigated this assumption with a series of simulations. Each simulation starts with a set of true underlying activation functions and generates simulated fixations using a simple stochastic sampling procedure that respects the sequential nature of fixations. I then analyzed the results to determine the conditions under which the observed fixation curves match the underlying functions, the reliability of the observed data, and the implications for Type I error and power. These simulations demonstrate that even under the simplest fixation-based models, observed fixation curves are systematically biased relative to the underlying activation functions, and they are substantially noisier, with important implications for reliability and power. I then present a potential generative model that may ultimately overcome many of these issues.
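The core of such a simulation is compact enough to sketch (a deliberately minimal stand-in for the article's simulations: the logistic activation function, the refractory parameters, and the bin size are all invented). Each fixation is launched according to the activation at its onset and then held for a refractory period, so aggregated fixation curves lag the generating function:

```python
import math
import random

def target_activation(t):
    """True underlying target activation: a logistic rise (parameters invented)."""
    return 1 / (1 + math.exp(-(t - 600) / 150))

def fixation_curve(n_trials=2000, duration=1500, step=50, seed=1):
    """Simulate serial fixations: each fixation is directed to the target with
    probability equal to the activation at its ONSET, then held for a
    refractory duration (100 ms minimum plus an exponential tail), so looks
    reflect activation from some time in the past."""
    rng = random.Random(seed)
    looks = [0] * (duration // step)
    for _ in range(n_trials):
        t = 0
        while t < duration:
            dur = 100 + int(rng.expovariate(1 / 150))
            if rng.random() < target_activation(t):  # decided at fixation onset
                for i in range(t // step, min((t + dur) // step, len(looks))):
                    looks[i] += 1
            t += dur
    return [c / n_trials for c in looks]
```

Comparing the simulated curve against `target_activation` shows the bias: while activation is rising, the observed proportion of target fixations at time t underestimates the true activation at t, because many ongoing fixations were launched when activation was lower.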
Affiliation(s)
- Bob McMurray
- Department of Psychological and Brain Sciences, 278 PBSB, University of Iowa, Iowa City, IA, 52242, USA.
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA.
- Department of Linguistics, University of Iowa, Iowa City, IA, USA.
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA.

15
Apfelbaum KS, Goodwin C, Blomquist C, McMurray B. The development of lexical competition in written- and spoken-word recognition. Q J Exp Psychol (Hove) 2023; 76:196-219. [PMID: 35296190 PMCID: PMC10962864 DOI: 10.1177/17470218221090483] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Efficient word recognition depends on the ability to overcome competition from overlapping words. The nature of the overlap depends on the input modality: spoken words have temporal overlap from other words that share phonemes in the same positions, whereas written words have spatial overlap from other words with letters in the same places. It is unclear how these differences in input format affect the ability to recognise a word and the types of competitors that become active while doing so. This study investigates word recognition in both modalities in children between 7 and 15 years of age. Children complete a visual-world paradigm eye-tracking task that measures competition from words with several types of overlap, using identical word lists across modalities. Results showed correlated developmental changes in the speed of target recognition in both modalities. In addition, developmental changes were seen in the efficiency of competitor suppression for some competitor types in the spoken modality. These data reveal some developmental continuity in the process of word recognition independent of modality but also some instances of independence in how competitors are activated. Stimuli, data, and analyses from this project are available at: https://osf.io/eav72.
Affiliation(s)
- Keith S Apfelbaum
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Claire Goodwin
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Christina Blomquist
- Department of Communication Sciences and Disorders, University of Maryland, College Park, MD, USA
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Department of Communication Sciences and Disorders, Department of Linguistics, Department of Otolaryngology, University of Iowa, Iowa City, IA, USA

16
Abstract
As a spoken word unfolds over time, similar sounding words (cap and cat) compete until one word "wins". Lexical competition becomes more efficient from infancy through adolescence. We examined one potential mechanism underlying this development: lexical inhibition, by which activated candidates suppress competitors. In Experiment 1, younger (7-8 years) and older (12-13 years) children heard words (cap) in which the onset was manipulated to briefly boost competition from a cohort competitor (cat). This was compared to a condition with a nonword (cack) onset that would not inhibit the target. Words were presented in a visual world task during which eye movements were recorded. Both groups showed less looking to the target when perceiving the competitor-splice relative to the nonword-splice, showing engagement of lexical inhibition. Exploratory analyses of linguistic adaptation across the experiment revealed that older children demonstrated consistent lexical inhibition across the experiment and younger children did not, initially showing no effect in the first half of trials and then a robust effect in the latter half. In Experiment 2, adults also displayed consistent lexical inhibition in the same task. These findings suggest that younger children do not consistently engage lexical inhibition in typical listening but can quickly bring it online in response to certain linguistic experiences. Computational modeling showed that age-related differences are best explained by increased engagement of inhibition rather than growth in activation. These findings suggest that continued development of lexical inhibition in later childhood may underlie increases in efficiency of spoken word recognition. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
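The modeling contrast at issue (engaging lateral inhibition vs. simply growing activation) can be illustrated with a minimal two-node interactive-activation sketch; this is not the authors' model, and all parameters (inputs, rate constants, decay) are invented:

```python
def compete(input_target, input_cohort, inhibition, rate=0.1, decay=0.1, steps=300):
    """Two lexical nodes with mutual lateral inhibition. Each node's net input
    is its bottom-up support minus the inhibition it receives from the other;
    activations grow toward 1 and passively decay."""
    a_t = a_c = 0.0
    for _ in range(steps):
        net_t = max(input_target - inhibition * a_c, 0.0)
        net_c = max(input_cohort - inhibition * a_t, 0.0)
        a_t += rate * (net_t * (1 - a_t) - decay * a_t)
        a_c += rate * (net_c * (1 - a_c) - decay * a_c)
    return a_t, a_c
```

With weak inhibition the cohort remains active at the end of the trial; with stronger inhibition the target suppresses it almost completely, which is the late-trial signature these experiments measure.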
17
van Rijn E, Gouws A, Walker SA, Knowland VCP, Cairney SA, Gaskell MG, Henderson LM. Do naps benefit novel word learning? Developmental differences and white matter correlates. Cortex 2023; 158:37-60. [PMID: 36434978 DOI: 10.1016/j.cortex.2022.09.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 07/04/2022] [Accepted: 09/26/2022] [Indexed: 11/07/2022]
Abstract
Memory representations of newly learned words undergo changes during nocturnal sleep, as evidenced by improvements in explicit recall and lexical integration (i.e., after sleep, novel words compete with existing words during online word recognition). Some studies have revealed larger sleep benefits in children relative to adults. However, whether daytime naps play a similar facilitatory role is unclear. We investigated the effect of a daytime nap (relative to wake) on explicit memory (recall/recognition) and lexical integration (lexical competition) of newly learned novel words in young adults and children aged 10-12 years, also exploring white matter correlates of the pre- and post-nap effects of word learning in the child group with diffusion-weighted MRI. In both age groups, a nap maintained explicit memory of novel words and wake led to forgetting. However, there was an age group interaction when comparing change in recall over the nap: children showed a slight improvement whereas adults showed a slight decline. There was no evidence of lexical integration at any point. Although children spent proportionally more time in slow-wave sleep (SWS) than adults, neither SWS nor spindle parameters correlated with over-nap changes in word learning. For children, increased fractional anisotropy (FA) in the uncinate fasciculus and arcuate fasciculus was associated with the recognition of novel words immediately after learning, and FA in the right arcuate fasciculus was further associated with changes in recall of novel words over a nap, supporting the importance of these tracts in the word learning and consolidation process. These findings point to a protective role of naps in word learning (at least under the present conditions), and emphasize the need to better understand both the active and passive roles that sleep plays in supporting vocabulary consolidation over development.
Affiliation(s)
- E van Rijn
- Department of Psychology, University of York, York, United Kingdom.
- A Gouws
- Department of Psychology, University of York, York, United Kingdom.
- S A Walker
- Department of Psychology, University of York, York, United Kingdom.
- V C P Knowland
- Department of Psychology, University of York, York, United Kingdom.
- S A Cairney
- Department of Psychology, University of York, York, United Kingdom.
- M G Gaskell
- Department of Psychology, University of York, York, United Kingdom.
- L M Henderson
- Department of Psychology, University of York, York, United Kingdom.

18
Kim J, Meyer L, Hendrickson K. The Role of Orthography and Phonology in Written Word Recognition: Evidence From Eye-Tracking in the Visual World Paradigm. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4812-4820. [PMID: 36306510 DOI: 10.1044/2022_jslhr-22-00231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE There is a long-standing debate about how written words are recognized. Central to this debate is the role of phonology. The objective of this study is to contribute to our collective understanding regarding the role of phonology in written word recognition. METHOD A total of 30 monolingual adults were tested using a novel written word version of the visual world paradigm (VWP). We compared activation of phonological anadromes (words that are matched for sounds but not letters, e.g., JAB-BADGE) and orthographic anadromes (words that are matched for letters but not sounds, e.g., LEG-GEL) to determine the relative role of phonology and orthography in familiar single-word reading. RESULTS We found that activation for phonological anadromes is earlier, more robust, and sustained longer than orthographic anadromes. CONCLUSIONS These results are most consistent with strong phonological theories of single-word reading that posit an early and robust role of phonology. This study has broad implications for larger debates regarding reading instruction.
Affiliation(s)
- Jina Kim
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Lindsey Meyer
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kristi Hendrickson
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City

19
Kutlu E, Chiu S, McMurray B. Moving away from deficiency models: Gradiency in bilingual speech categorization. Front Psychol 2022; 13:1033825. [PMID: 36507048 PMCID: PMC9730410 DOI: 10.3389/fpsyg.2022.1033825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Accepted: 11/03/2022] [Indexed: 11/25/2022] Open
Abstract
For much of its history, categorical perception was treated as a foundational theory of speech perception, which suggested that quasi-discrete categorization was a goal of speech perception. This had a profound impact on bilingualism research which adopted similar tasks to use as measures of nativeness or native-like processing, implicitly assuming that any deviation from discreteness was a deficit. This is particularly problematic for listeners like heritage speakers whose language proficiency, both in their heritage language and their majority language, is questioned. However, we now know that in the monolingual listener, speech perception is gradient and listeners use this gradiency to adjust subphonetic details, recover from ambiguity, and aid learning and adaptation. This calls for new theoretical and methodological approaches to bilingualism. We present the Visual Analogue Scaling task which avoids the discrete and binary assumptions of categorical perception and can capture gradiency more precisely than other measures. Our goal is to provide bilingualism researchers new conceptual and empirical tools that can help examine speech categorization in different bilingual communities without the necessity of forcing their speech categorization into discrete units and without assuming a deficit model.
Affiliation(s)
- Ethan Kutlu
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, United States
- Department of Linguistics, University of Iowa, Iowa City, IA, United States
- Samantha Chiu
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, United States
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, United States
- Department of Linguistics, University of Iowa, Iowa City, IA, United States

20
McMurray B, Sarrett ME, Chiu S, Black AK, Wang A, Canale R, Aslin RN. Decoding the temporal dynamics of spoken word and nonword processing from EEG. Neuroimage 2022; 260:119457. [PMID: 35842096 PMCID: PMC10875705 DOI: 10.1016/j.neuroimage.2022.119457] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Revised: 07/02/2022] [Accepted: 07/06/2022] [Indexed: 11/23/2022] Open
Abstract
The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real-time. However, few methods capture the neural basis of this dynamic competition on a msec-by-msec basis. This is crucial for understanding the neuroscience of language, and for understanding hearing, language and cognitive disorders in people for whom current behavioral methods are not suitable. We applied machine-learning techniques to standard EEG signals to decode which word was heard on each trial and analyzed the patterns of confusion over time. Results mirrored psycholinguistic findings: Early on, the decoder was equally likely to report the target (e.g., baggage) or a similar-sounding competitor (badger), but by around 500 msec, competitors were suppressed. Follow-up analyses show that this is robust across EEG systems (gel and saline), with fewer channels, and with fewer trials. Results are robust within individuals and show high reliability. This suggests a powerful and simple paradigm that can assess the neural dynamics of speech decoding, with potential applications for understanding lexical development in a variety of clinical disorders.
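The decode-and-analyze-confusions logic can be sketched on synthetic data (everything below is invented for illustration: the channel patterns, the noise level, the 0-10 time scale, and the use of a simple nearest-centroid decoder in place of the study's classifiers). Cohort words share an "onset" pattern that fades as word-specific patterns ramp in, so the decoder confuses baggage with badger early but not late:

```python
import random

rng = random.Random(0)
WORDS = ["baggage", "badger", "cabbage"]
N_CH = 8
# Invented word-specific channel patterns; the cohort pair (baggage/badger)
# additionally shares an onset pattern that the unrelated word lacks.
OWN = {w: [rng.gauss(0, 3) for _ in range(N_CH)] for w in WORDS}
ONSET = {"baggage": 1.0, "badger": 1.0, "cabbage": -1.0}

def trial(word, t):
    """One simulated trial at timepoint t (arbitrary 0-10 scale): the shared
    onset pattern fades as the word-specific pattern ramps in, plus noise."""
    w = min(t / 10, 1.0)
    return [(1 - w) * ONSET[word] + w * o + rng.gauss(0, 1) for o in OWN[word]]

def nearest(x, centroids):
    """Classify a trial by its nearest class centroid (squared distance)."""
    return min(centroids, key=lambda w: sum((a - b) ** 2
                                            for a, b in zip(x, centroids[w])))

def cohort_confusion(t, n_train=50, n_test=200):
    """Train a nearest-centroid decoder at timepoint t; return how often
    'baggage' test trials are decoded as the cohort competitor 'badger'."""
    centroids = {w: [sum(col) / n_train for col in
                     zip(*[trial(w, t) for _ in range(n_train)])] for w in WORDS}
    return sum(nearest(trial("baggage", t), centroids) == "badger"
               for _ in range(n_test)) / n_test
```

Running `cohort_confusion(2)` versus `cohort_confusion(10)` reproduces the qualitative pattern in the abstract: early confusions between cohort words, later suppression.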
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, Dept. of Communication Sciences and Disorders, Dept. of Linguistics and Dept. of Otolaryngology, University of Iowa.
- McCall E Sarrett
- Interdisciplinary Graduate Program in Neuroscience, University of Iowa
- Samantha Chiu
- Dept. of Psychological and Brain Sciences, University of Iowa
- Alexis K Black
- School of Audiology and Speech Sciences, University of British Columbia, Haskins Laboratories
- Alice Wang
- Dept. of Psychology, University of Oregon, Haskins Laboratories
- Rebecca Canale
- Dept. of Psychological Sciences, University of Connecticut, Haskins Laboratories
- Richard N Aslin
- Haskins Laboratories, Department of Psychology and Child Study Center, Yale University, Department of Psychology, University of Connecticut

21
Krishnan S, Cler GJ, Smith HJ, Willis HE, Asaridou SS, Healy MP, Papp D, Watkins KE. Quantitative MRI reveals differences in striatal myelin in children with DLD. eLife 2022; 11:e74242. [PMID: 36164824 PMCID: PMC9514847 DOI: 10.7554/elife.74242] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 07/21/2022] [Indexed: 12/25/2022] Open
Abstract
Developmental language disorder (DLD) is a common neurodevelopmental disorder characterised by receptive or expressive language difficulties or both. While theoretical frameworks and empirical studies support the idea that there may be neural correlates of DLD in frontostriatal loops, findings are inconsistent across studies. Here, we use a novel semiquantitative imaging protocol - multi-parameter mapping (MPM) - to investigate microstructural neural differences in children with DLD. The MPM protocol allows us to reproducibly map specific indices of tissue microstructure. In 56 typically developing children and 33 children with DLD, we derived maps of (1) longitudinal relaxation rate R1 (1/T1), (2) transverse relaxation rate R2* (1/T2*), and (3) Magnetization Transfer saturation (MTsat). R1 and MTsat predominantly index myelin, while R2* is sensitive to iron content. Children with DLD showed reductions in MTsat values in the caudate nucleus bilaterally, as well as in the left ventral sensorimotor cortex and Heschl's gyrus. They also had globally lower R1 values. No group differences were noted in R2* maps. Differences in MTsat and R1 were coincident in the caudate nucleus bilaterally. These findings support our hypothesis of corticostriatal abnormalities in DLD and indicate abnormal levels of myelin in the dorsal striatum in children with DLD.
Affiliation(s)
- Saloni Krishnan
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Department of Psychology, Royal Holloway, University of London, Egham Hill, London, United Kingdom
- Gabriel J Cler
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Department of Speech and Hearing Sciences, University of Washington, Seattle, United States
- Harriet J Smith
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Hanna E Willis
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
- Salomi S Asaridou
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Máiréad P Healy
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Daniel Papp
- NeuroPoly Lab, Biomedical Engineering Department, Polytechnique Montreal, Montreal, Canada
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, United Kingdom
- Kate E Watkins
- Wellcome Centre for Integrative Neuroimaging, Dept of Experimental Psychology, University of Oxford, Oxford, United Kingdom

22
Smith FX, McMurray B. Lexical Access Changes Based on Listener Needs: Real-Time Word Recognition in Continuous Speech in Cochlear Implant Users. Ear Hear 2022; 43:1487-1501. [PMID: 35067570 PMCID: PMC9300769 DOI: 10.1097/aud.0000000000001203] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
OBJECTIVES A key challenge in word recognition is the temporary ambiguity created by the fact that speech unfolds over time. In normal hearing (NH) listeners, this temporary ambiguity is resolved through incremental processing and competition among lexical candidates. Post-lingually deafened cochlear implant (CI) users show similar incremental processing and competition but with slight delays. However, even brief delays could lead to drastic changes when compounded across multiple words in a phrase. This study asks whether words presented in non-informative continuous speech (a carrier phrase) are processed differently than in isolation and whether NH listeners and CI users exhibit different effects of a carrier phrase. DESIGN In a Visual World Paradigm experiment, listeners heard words either in isolation or in non-informative carrier phrases (e.g., "click on the…"). Listeners selected the picture corresponding to the target word from among four items including the target word (e.g., mustard), a cohort competitor (e.g., mustache), a rhyme competitor (e.g., custard), and an unrelated item (e.g., penguin). Eye movements were tracked as an index of the relative activation of each lexical candidate as competition unfolds over the course of word recognition. Participants included 21 post-lingually deafened cochlear implant users and 21 NH controls. A replication experiment presented in the Supplemental Digital Content (http://links.lww.com/EANDH/A999) included an additional 22 post-lingually deafened CI users and 18 NH controls. RESULTS Both CI users and the NH controls were accurate at recognizing the words both in continuous speech and in isolation. The time course of lexical activation (indexed by the fixations) differed substantially between groups. CI users were delayed in fixating the target relative to NH controls. Additionally, CI users showed less competition from cohorts than NH controls (even though previous studies have often reported increased competition). However, CI users took longer to suppress the cohort and suppressed it less fully than the NH controls. For both CI users and NH controls, embedding words in carrier phrases led to more immediacy in lexical access, as observed by increases in cohort competition relative to when words were presented in isolation. However, CI users were not differentially affected by the carriers. CONCLUSIONS Unlike prior work, CI users appeared to exhibit a "wait-and-see" profile, in which lexical access is delayed, minimizing early competition. However, CI users simultaneously sustained competitor activation late in the trial, possibly to preserve flexibility. This hybrid profile has not been observed previously. When target words are heard in continuous speech, both CI users and NH controls more heavily weight early information. However, CI users (but not NH listeners) also commit less fully to the target, potentially keeping options open if they need to recover from a misperception. This mix of patterns reflects a lexical system that is extremely flexible and adapts to fit the needs of a listener.
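The dependent measure used in these studies, fixation proportions over time, is straightforward to compute from raw eye-tracking records. A minimal sketch (the object labels and trial format are assumptions for illustration, not the authors' analysis code):

```python
def fixation_proportions(trials, duration=2000, step=50):
    """Aggregate raw fixations into the standard VWP index: for each time
    bin, the proportion of trials in which the eye is on each object type.
    Each trial is a list of (start_ms, end_ms, object) fixations."""
    bins = range(0, duration, step)
    counts = {obj: [0] * len(bins) for obj in
              ("target", "cohort", "rhyme", "unrelated")}
    for fixations in trials:
        for start, end, obj in fixations:
            for i, t in enumerate(bins):
                if start <= t < end:  # fixating obj at the start of bin i
                    counts[obj][i] += 1
    return {obj: [c / len(trials) for c in row] for obj, row in counts.items()}
```

Each output row gives, for every 50 ms bin, the proportion of trials fixating the target, cohort, rhyme, or unrelated picture, which is the curve plotted in these studies.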
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, University of Iowa
- Dept. of Otolaryngology, University of Iowa

23
McMurray B, Apfelbaum KS, Tomblin JB. The Slow Development of Real-Time Processing: Spoken-Word Recognition as a Crucible for New Thinking About Language Acquisition and Language Disorders. CURRENT DIRECTIONS IN PSYCHOLOGICAL SCIENCE 2022; 31:305-315. [PMID: 37663784 PMCID: PMC10473872 DOI: 10.1177/09637214221078325] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/05/2023]
Abstract
Words are fundamental to language, linking sound, articulation, and spelling to meaning and syntax; and lexical deficits are core to communicative disorders. Work in language acquisition commonly focuses on how lexical knowledge (knowledge of words' sound patterns and meanings) is acquired. But lexical knowledge is insufficient to account for skilled language use. Sophisticated real-time processes must decode the sound pattern of words and interpret them appropriately. We review work that bridges this gap by using sensitive real-time measures (eye tracking in the visual world paradigm) of school-age children's processing of highly familiar words. This work reveals that the development of word recognition skills can be characterized by changes in the rate at which decisions unfold in the lexical system (the activation rate). Moreover, contrary to the standard view that these real-time skills largely develop during infancy and toddlerhood, they develop slowly, at least through adolescence. In contrast, language disorders can be linked to differences in the ultimate degree to which competing interpretations are suppressed (competition resolution), and these differences can be mechanistically linked to deficits in inhibition. These findings have implications for real-world problems such as reading difficulties and second-language acquisition. They suggest that developing accurate, flexible, and efficient processing is just as important a developmental goal as is acquiring language knowledge.
Affiliation(s)
- Bob McMurray
- Department of Psychological and Brain Sciences
- Department of Communication Sciences and Disorders
- Department of Linguistics
- DeLTA Center, University of Iowa
- Keith S. Apfelbaum
- Department of Psychological and Brain Sciences
- DeLTA Center, University of Iowa
- J. Bruce Tomblin
- Department of Communication Sciences and Disorders
- DeLTA Center, University of Iowa

24
Lescht E, Venker C, McHaney JR, Bohland JW, Wray AH. Novel word recognition in childhood stuttering. TOPICS IN LANGUAGE DISORDERS 2022; 42:41-56. [PMID: 35295185 PMCID: PMC8920118 DOI: 10.1097/tld.0000000000000271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Language skills have long been posited to be a factor contributing to developmental stuttering. The current study aimed to evaluate whether novel word recognition, a critical skill for language development, differentiated children who stutter from children who do not stutter. Twenty children who stutter and 18 children who do not stutter, aged 3–8 years, completed a novel word recognition task. Real-time eye gaze was used to evaluate online learning. Retention was measured immediately and after a 1-hr delay. Children who stutter and children who do not stutter exhibited similar patterns of online novel word recognition. Both groups also had comparable retention accuracy. Together, these results revealed that novel word recognition and retention were similar in children who stutter and children who do not stutter. These patterns suggest that differences observed in previous studies of language in stuttering may not be driven by novel word recognition abilities in children who stutter.
Affiliation(s)
- Erica Lescht
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Courtney Venker
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Jason W. Bohland
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Amanda Hampton Wray
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania

25
Apfelbaum KS, Klein-Packard J, McMurray B. The pictures who shall not be named: Empirical support for benefits of preview in the Visual World Paradigm. JOURNAL OF MEMORY AND LANGUAGE 2021; 121:104279. [PMID: 34326570 PMCID: PMC8315347 DOI: 10.1016/j.jml.2021.104279] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
A common critique of the Visual World Paradigm (VWP) in psycholinguistic studies is that what is designed as a measure of language processes is meaningfully altered by the visual context of the task. This is crucial, particularly in studies of spoken word recognition, where the displayed images are usually seen as just a part of the measure and are not of fundamental interest. Many variants of the VWP allow participants to sample the visual scene before a trial begins. However, this could bias their interpretations of the later speech or even lead to abnormal processing strategies (e.g., comparing the input to only preactivated working memory representations). Prior work has focused only on whether preview duration changes fixation patterns. However, preview could affect a number of processes, such as visual search, that would not challenge the interpretation of the VWP. The present study uses a series of targeted manipulations of the preview period to ask if preview alters looking behavior during a trial, and why. Results show that evidence of incremental processing and phonological competition seen in the VWP is not dependent on preview, and is not enhanced by manipulations that directly encourage phonological prenaming. Moreover, some forms of preview can eliminate nuisance variance deriving from object recognition and visual search demands in order to produce a more sensitive measure of linguistic processing. These results deepen our understanding of how the visual scene interacts with language processing to drive fixation patterns in the VWP, and reinforce the value of the VWP as a tool for measuring real-time language processing. Stimuli, data and analysis scripts are available at https://osf.io/b7q65/.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, University of Iowa
- Dept. of Communication Sciences and Disorders, Dept. of Linguistics, Dept. of Otolaryngology, University of Iowa

26
Hendrickson K, Apfelbaum K, Goodwin C, Blomquist C, Klein K, McMurray B. The profile of real-time competition in spoken and written word recognition: More similar than different. Q J Exp Psychol (Hove) 2021; 75:1653-1673. [PMID: 34666573 DOI: 10.1177/17470218211056842] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Word recognition occurs across two sensory modalities: auditory (spoken words) and visual (written words). While each faces different challenges, they are often described in similar terms as a competition process by which multiple lexical candidates are activated and compete for recognition. While there is a general consensus regarding the types of words that compete during spoken word recognition, there is less consensus for written word recognition. The present study develops a novel version of the Visual World Paradigm (VWP) to examine written word recognition and uses this to assess the nature of the competitor set during word recognition in both modalities using the same experimental design. For both spoken and written words, we found evidence for activation of onset competitors (cohorts, e.g., cat, cap) and words that contain the same phonemes or letters in reverse order (anadromes, e.g., cat, tack). We found no evidence of activation for rhymes (e.g., cat, hat). The results across modalities were quite similar, with the exception that for spoken words, cohorts were more active than anadromes, whereas for written words activation was similar. These results suggest a common characterisation of lexical similarity across spoken and written words: temporal or spatial order is coarsely coded, and onsets may receive more weight in both systems. However, for spoken words, temporary ambiguity during the moment of processing gives cohorts an additional boost during real-time recognition.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Keith Apfelbaum
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Claire Goodwin
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA; University of Iowa Health Network Rehabilitation Hospital, Coralville, IA, USA
- Christina Blomquist
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Kelsey Klein
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Bob McMurray
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA; Department of Otolaryngology, University of Iowa, Iowa City, IA, USA

27
Abstract
OBJECTIVES Whispered speech offers a unique set of challenges to speech perception and word recognition. The goals of the present study were twofold: first, to determine how listeners recognize whispered speech; second, to inform major theories of spoken word recognition by considering how recognition changes when major cues to phoneme identity are reduced or largely absent compared with normal voiced speech. DESIGN Using eye tracking in the Visual World Paradigm, we examined how listeners recognize whispered speech. After hearing a target word (normal or whispered), participants selected the corresponding image from a display of four: a target (e.g., money), a word that shares sounds with the target at the beginning (cohort competitor, e.g., mother), a word that shares sounds with the target at the end (rhyme competitor, e.g., honey), and a phonologically unrelated word (e.g., whistle). Eye movements to each object were monitored to measure (1) how fast listeners process whispered speech, and (2) how strongly they consider lexical competitors (cohorts and rhymes) as the speech signal unfolds. RESULTS Listeners were slower to recognize whispered words. Compared with normal speech, listeners displayed slower reaction times to click the target image, were slower to fixate the target, and fixated the target less overall. Further, we found clear evidence that the dynamics of lexical competition are altered during whispered speech recognition. Relative to normal speech, words that overlapped with the target at the beginning (cohorts) displayed slower, reduced, and delayed activation, whereas words that overlapped with the target at the end (rhymes) exhibited faster, more robust, and longer lasting activation. CONCLUSION When listeners are confronted with whispered speech, they engage in a "wait-and-see" approach.
Listeners delay lexical access, and by the time they begin to consider what word they are hearing, the beginning of the word has largely come and gone, and activation for cohorts is reduced. However, delays in lexical access actually increase consideration of rhyme competitors; the delay pushes lexical activation to a point later in processing, and the recognition system puts more weight on the word-final overlap between the target and the rhyme.
28
Kapnoula EC. On the Locus of L2 Lexical Fuzziness: Insights From L1 Spoken Word Recognition and Novel Word Learning. Front Psychol 2021; 12:689052. [PMID: 34305748 PMCID: PMC8295481 DOI: 10.3389/fpsyg.2021.689052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 06/15/2021] [Indexed: 11/13/2022] Open
Abstract
The examination of how words are learned can offer valuable insights into the nature of lexical representations. For example, a common assessment of novel word learning is based on its ability to interfere with other words; given that words are known to compete with each other (Luce and Pisoni, 1998; Dahan et al., 2001), we can use the capacity of a novel word to interfere with the activation of other lexical representations as a measure of the degree to which it is integrated into the mental lexicon (Leach and Samuel, 2007). This measure allows us to assess novel word learning in L1 or L2, but also the degree to which representations from the two lexica interact with each other (Marian and Spivey, 2003). Despite the somewhat independent lines of research on L1 and L2 word learning, common patterns emerge across the two literatures (Lindsay and Gaskell, 2010; Palma and Titone, 2020). In both cases, lexicalization appears to follow a similar trajectory. In L1, newly encoded words often fail at first to engage in competition with known words, but they do so later, after they have been better integrated into the mental lexicon (Gaskell and Dumay, 2003; Dumay and Gaskell, 2012; Bakker et al., 2014). Similarly, L2 words generally have a facilitatory effect, which can, however, become inhibitory in the case of more robust (high-frequency) lexical representations. Despite the similar pattern, L1 lexicalization is described in terms of inter-lexical connections (Leach and Samuel, 2007), leading to more automatic processing (McMurray et al., 2016); whereas in L2 word learning, lack of lexical inhibition is attributed to less robust (i.e., fuzzy) L2 lexical representations. Here, I point to these similarities and I use them to argue that a common mechanism may underlie similar patterns across the two literatures.
29
Park J, Miller CA, Sanjeevan T, Van Hell JG, Weiss DJ, Mainela-Arnold E. Non-linguistic cognitive measures as predictors of functionally defined developmental language disorder in monolingual and bilingual children. INTERNATIONAL JOURNAL OF LANGUAGE & COMMUNICATION DISORDERS 2021; 56:858-872. [PMID: 34137124 DOI: 10.1111/1460-6984.12632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 04/01/2021] [Accepted: 04/28/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND & AIMS Given that standardized language measures alone are inadequate for identifying functionally defined developmental language disorder (fDLD), this study investigated whether non-linguistic cognitive abilities (procedural learning, motor functions, executive attention, processing speed) can increase the prediction accuracy of fDLD in children in linguistically diverse settings. METHODS & PROCEDURES We examined non-linguistic cognitive abilities in mono- and bilingual school-aged children (ages 8-12) with and without fDLD. Typically developing (TD) children (14 monolinguals, 12 bilinguals) and children with fDLD (28 monolinguals, 12 bilinguals) completed tasks measuring motor functions, procedural learning, executive attention and processing speed. Children were classified as having fDLD based on parental or professional concerns regarding their daily language functioning; if no concerns were present, children were classified as TD. Standardized English scores, non-verbal IQ scores and years of maternal education were also obtained. Likelihood ratios were used to examine how well each measure separated the fDLD versus TD groups. A binary logistic regression was used to test whether combined measures enhanced the prediction of fDLD status. OUTCOMES & RESULTS A combination of linguistic and non-linguistic measures provided the best distinction between fDLD and TD for both mono- and bilingual groups. For monolingual children, the combined measures included English language scores, functional motor abilities and processing speed, whereas for bilinguals, the combined measures included English language scores and procedural learning. CONCLUSIONS & IMPLICATIONS A combination of non-linguistic and linguistic measures significantly improved the distinction between fDLD and TD for both mono- and bilingual groups. This study supports the possibility of using non-linguistic cognitive measures to identify fDLD in linguistically diverse settings.
WHAT THIS PAPER ADDS What is already known on the subject Given that standardized English language measures may fail to identify functional language disorder, we examined whether supplementing English language measures with non-linguistic cognitive tasks could resolve the problem. Our study is based on the hypothesis that non-linguistic cognitive abilities contribute to language processing and learning. This is further supported by previous findings that children with language disorder exhibit non-linguistic cognitive deficits. What this paper adds to existing knowledge The results indicated that a combination of linguistic and non-linguistic cognitive abilities increased the prediction of functional language disorder in both mono- and bilingual children. What are the potential or actual clinical implications of this work? This study supports the possibility of using non-linguistic cognitive measures to identify the risk of language disorder in linguistically diverse settings.
Affiliation(s)
- Jisook Park
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Carol A Miller
- Department of Communication Sciences and Disorders, Pennsylvania State University, University Park, PA, USA
- Teenu Sanjeevan
- Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Janet G Van Hell
- Department of Psychology, Pennsylvania State University, University Park, PA, USA
- Daniel J Weiss
- Department of Psychology, Pennsylvania State University, University Park, PA, USA
- Elina Mainela-Arnold
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Department of Psychology and Speech and Language Pathology, University of Turku, Turku, Finland

30
Nouraey P, Ayatollahi MA, Moghadas M. Late Language Emergence: A literature review. Sultan Qaboos Univ Med J 2021; 21:e182-e190. [PMID: 34221464 PMCID: PMC8219342 DOI: 10.18295/squmj.2021.21.02.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 08/09/2020] [Accepted: 09/16/2020] [Indexed: 11/16/2022] Open
Abstract
Infants usually say their first word at the age of 12 months; subsequently, within the next 6-12 months, they develop a vocabulary of approximately 50 words, along with the ability to make two-word combinations. However, late talkers (LTs) demonstrate delayed speech in the absence of hearing impairments, cognitive developmental issues or relevant birth history. The prevalence of late language emergence (LLE) in toddlers is reported to be 10-15%. Studies of LTs are both theoretically and clinically significant. Early diagnosis and clinical intervention may result in relatively stable speech capabilities by the early school years. The present article aimed to review both theoretical and empirical studies regarding LLE within the process of first language acquisition, as well as methods for the early diagnosis of delayed speech in children and the authors' own clinical and theoretical recommendations.
Affiliation(s)
- Mohammad A Ayatollahi
- Department of English Language, Faculty of Foreign Languages, Islamic Azad University, Sepidan, Iran
- Marzieh Moghadas
- Department of Behavioral Medicine, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat, Oman

31
van Alphen P, Brouwer S, Davids N, Dijkstra E, Fikkert P. Word Recognition and Word Prediction in Preschoolers With (a Suspicion of) a Developmental Language Disorder: Evidence From Eye Tracking. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2005-2021. [PMID: 34019773 DOI: 10.1044/2021_jslhr-20-00227] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose This study compares online word recognition and prediction in preschoolers with (a suspicion of) a developmental language disorder (DLD) and typically developing (TD) controls. Furthermore, it investigates correlations between these measures and the link between online and off-line language scores in the DLD group. Method Using the visual world paradigm, Dutch children ages 3;6 (years;months) with (a suspicion of) DLD (n = 51) and TD peers (n = 31) listened to utterances such as, "Kijk, een hoed!" (Look, a hat!) in a word recognition task, and sentences such as, "Hé, hij leest gewoon een boek" (literal translation: Hey, he reads just a book) in a word prediction task, while watching a target and distractor picture. Results Both groups demonstrated a significant word recognition effect that looked similar directly after target onset. However, the DLD group looked longer at the target than the TD group and shifted slower from the distractor to target pictures. Within the DLD group, word recognition was linked to off-line expressive language scores. For word prediction, the DLD group showed a smaller effect and slower shifts from verb onset compared to the TD group. Interestingly, within the DLD group, prediction behavior varied considerably, and was linked to receptive and expressive language scores. Finally, slower shifts in word recognition were related to smaller prediction effects. Conclusions While the groups' word recognition abilities looked similar, and only differed in processing speed and dwell time, the DLD group showed atypical verb-based prediction behavior. This may be due to limitations in their processing capacity and/or their linguistic knowledge, in particular of verb argument structure.
Affiliation(s)
- Nina Davids
- Royal Dutch Kentalis, Sint-Michielsgestel, the Netherlands
- Emma Dijkstra
- Royal Dutch Kentalis, Sint-Michielsgestel, the Netherlands

32
Bice K, Kroll JF. Grammatical processing in two languages: How individual differences in language experience and cognitive abilities shape comprehension in heritage bilinguals. JOURNAL OF NEUROLINGUISTICS 2021; 58:100963. [PMID: 33390660 PMCID: PMC7774644 DOI: 10.1016/j.jneuroling.2020.100963] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Recent studies have demonstrated variation in language processing for monolingual and bilingual speakers alike, suggesting that only by considering individual differences will an accurate picture of the consequences of language experience be adequately understood. This approach can be illustrated in ERP research that has shown that sentence contexts that traditionally elicit a P600 component in response to a syntactic violation elicit an N400 response for a subset of individuals. That result has been reported for monolingual speakers processing sentences in their L1 and also for bilinguals processing sentences in their L2. To date, no studies have compared variation in L1 and L2 ERP effects in the very same bilingual speakers. In the present paper, we do that by examining sentence processing in heritage bilinguals who acquired both languages from early childhood but for whom the L2 typically becomes the dominant language. Variation in ERPs produced by the non-dominant L1 and dominant L2 of heritage bilinguals was compared to variation found in monolingual L1 processing. The group-averaged results showed the smallest N400 and P600 responses in the native, but no longer dominant, L1 of heritage bilinguals, and the largest in the monolinguals. Individual difference analyses linking ERP variation to working memory and language proficiency showed that working memory was the primary factor related to monolingual L1 processing, whereas bilinguals did not show this relationship. In contrast, proficiency was the primary factor related to ERP responses in bilinguals' no-longer-dominant L1, but was unrelated to monolingual L1 processing, whereas bilinguals' dominant L2 processing showed an intermediate relationship. Finally, the N400 was absent for bilinguals performing the task in the same language in which they initially learned to read, but significantly larger when bilinguals performed the task in the other language.
The results support the idea that proficient bilinguals utilize the same underlying mechanisms to process both languages, although the factors that affect processing in each language may differ. More broadly, we find that bilingualism is an experience that opens the language system, allowing it to perform fluidly under changing circumstances such as increasing proficiency. In contrast, language processing in monolinguals was primarily related to relatively stable factors (working memory).
Affiliation(s)
- Kinsey Bice
- Department of Psychology, Pennsylvania State University
- Department of Psychology, University of Washington
- Judith F. Kroll
- Department of Language Science, University of California, Irvine

33
Kaganovich N, Schumaker J, Christ S. Impaired Audiovisual Representation of Phonemes in Children with Developmental Language Disorder. Brain Sci 2021; 11:brainsci11040507. [PMID: 33923647 PMCID: PMC8073635 DOI: 10.3390/brainsci11040507] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Revised: 04/06/2021] [Accepted: 04/10/2021] [Indexed: 11/25/2022] Open
Abstract
We examined whether children with developmental language disorder (DLD) differed from their peers with typical development (TD) in the degree to which they encode information about a talker’s mouth shape into long-term phonemic representations. Children watched a talker’s face and listened to rare changes from [i] to [u] or the reverse. In the neutral condition, the talker’s face had a closed mouth throughout. In the audiovisual violation condition, the mouth shape always matched the frequent vowel, even when the rare vowel was played. We hypothesized that in the neutral condition no long-term audiovisual memory traces for speech sounds would be activated. Therefore, the neural response elicited by deviants would reflect only a violation of the observed audiovisual sequence. In contrast, we expected that in the audiovisual violation condition, a long-term memory trace for the speech sound/lip configuration typical for the frequent vowel would be activated. In this condition then, the neural response elicited by rare sound changes would reflect a violation of not only observed audiovisual patterns but also of a long-term memory representation for how a given vowel looks when articulated. Children pressed a response button whenever they saw a talker’s face assume a silly expression. We found that in children with TD, rare auditory changes produced a significant mismatch negativity (MMN) event-related potential (ERP) component over the posterior scalp in the audiovisual violation condition but not in the neutral condition. In children with DLD, no MMN was present in either condition. Rare vowel changes elicited a significant P3 in both groups and conditions, indicating that all children noticed auditory changes. 
Our results suggest that children with TD, but not children with DLD, incorporate visual information into long-term phonemic representations and detect violations in audiovisual phonemic congruency even when they perform a task that is unrelated to phonemic processing.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, USA
- Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038, USA
- Correspondence: Tel.: +1-(765)-494-4233; Fax: +1-(765)-494-0771
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, USA
- Sharon Christ
- Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907-2066, USA
- Department of Human Development and Family Studies, Purdue University, 1202 West State Street, West Lafayette, IN 47907-2055, USA

34
Kim S, Schwalje AT, Liu AS, Gander PE, McMurray B, Griffiths TD, Choi I. Pre- and post-target cortical processes predict speech-in-noise performance. Neuroimage 2021; 228:117699. [PMID: 33387631 PMCID: PMC8291856 DOI: 10.1016/j.neuroimage.2020.117699] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 11/06/2020] [Accepted: 12/23/2020] [Indexed: 12/19/2022] Open
Abstract
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie the variance in SiN ability. Here, we elucidated several cortical functions involved in a SiN task and their contributions to individual variance using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in left supramarginal gyrus (SMG, BA40; the dorsal lexicon area) with quieter noise. Through an individual differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, which we term the internal SNR. Listeners with better internal SNR showed better SiN performance. Further, we found that post-speech-time SMG activity explains a further amount of variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target-time processing to attenuate the neural representation of background noise and post-target-time processing to extract information from speech sounds.
Affiliation(s)
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Adam T Schwalje
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Andrew S Liu
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Phillip E Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Inyong Choi
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA

35
Hendrickson K, Oleson J, Walker E. School-Age Children Adapt the Dynamics of Lexical Competition in Suboptimal Listening Conditions. Child Dev 2021; 92:638-649. [PMID: 33476043 DOI: 10.1111/cdev.13530] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Although the ability to understand speech in adverse listening conditions is paramount for effective communication across the life span, little is understood about how this critical processing skill develops. This study asks how the dynamics of spoken word recognition (i.e., lexical access and competition) change during soft speech in 8- to 11-year-olds (n = 26). Lexical competition and access for speech at lower intensity levels was measured using eye-tracking and the visual world paradigm. Overall the results suggest that soft speech influences the magnitude and timing of lexical access and competition. These results suggest that lexical competition is a cognitive process that can be adapted in the school-age years to help cope with increased uncertainty due to alterations in the speech signal.
36
Sarrett ME, McMurray B, Kapnoula EC. Dynamic EEG analysis during language comprehension reveals interactive cascades between perceptual processing and sentential expectations. BRAIN AND LANGUAGE 2020; 211:104875. [PMID: 33086178 PMCID: PMC7682806 DOI: 10.1016/j.bandl.2020.104875] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 08/07/2020] [Accepted: 10/02/2020] [Indexed: 05/22/2023]
Abstract
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed effects model over time to quantify how language processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate: (1) perceptual processing and memory for fine-grained acoustics is preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases, however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.
Affiliation(s)
- McCall E Sarrett
- Interdisciplinary Graduate Program in Neuroscience, 356 Medical Research Center, University of Iowa, Iowa City, IA, 52242, United States.
- Bob McMurray
- Department of Psychological & Brain Sciences, W311 Seashore Hall, University of Iowa, Iowa City, IA, 52242, United States
- Efthymia C Kapnoula
- Department of Psychological & Brain Sciences, W311 Seashore Hall, University of Iowa, Iowa City, IA, 52242, United States; Basque Center on Cognition, Brain, & Language, Mikeletegi Pasealekua, 69, 20009 Donostia, Gipuzkoa, Spain
|
37
|
Lieberman AM, Borovsky A. Lexical Recognition in Deaf Children Learning American Sign Language: Activation of Semantic and Phonological Features of Signs. LANGUAGE LEARNING 2020; 70:935-973. [PMID: 33510545 PMCID: PMC7837603 DOI: 10.1111/lang.12409] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Children learning language efficiently process single words, and activate semantic, phonological, and other features of words during recognition. We investigated lexical recognition in deaf children acquiring American Sign Language (ASL) to determine how perceiving language in the visual-spatial modality affects lexical recognition. Twenty native- or early-exposed signing deaf children (ages 4 to 8 years) participated in a visual world eye-tracking study. Children were presented with a single ASL sign, target picture, and three competitor pictures that varied in their phonological and semantic relationship to the target. Children shifted gaze to the target picture shortly after sign offset. Children showed robust evidence for activation of semantic but not phonological features of signs; however, in their behavioral responses, children were most susceptible to phonological competitors. Results demonstrate that single word recognition in ASL is largely parallel to spoken language recognition among children who are developing a mature lexicon.
Affiliation(s)
- Amy M Lieberman
- Language and Literacy Department, Wheelock College of Education and Human Development, Boston University, 2 Silber Way, Boston, MA 02215
- Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122
|
38
|
Key AP, Venker CE, Sandbank MP. Psychophysiological and Eye-Tracking Markers of Speech and Language Processing in Neurodevelopmental Disorders: New Options for Difficult-to-Test Populations. AMERICAN JOURNAL ON INTELLECTUAL AND DEVELOPMENTAL DISABILITIES 2020; 125:465-474. [PMID: 33211813 PMCID: PMC8011582 DOI: 10.1352/1944-7558-125.6.465] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 07/01/2020] [Indexed: 06/02/2023]
Abstract
It can be challenging to accurately assess speech and language processing in preverbal or minimally verbal individuals with neurodevelopmental disabilities (NDD) using standardized behavioral tools. Event-related potential and eye tracking methods offer novel means to objectively document receptive language processing without requiring purposeful behavioral responses. Working around many of the cognitive, motor, or social difficulties in NDDs, these tools allow for minimally invasive, passive assessment of language processing and generate continuous scores that may have utility as biomarkers of individual differences and indicators of treatment effectiveness. Researchers should consider including physiological measures in assessment batteries to allow for more precise capture of language processing in individuals for whom it may not be behaviorally apparent.
|
39
|
Malins JG, Landi N, Ryherd K, Frijters JC, Magnuson JS, Rueckl JG, Pugh KR, Sevcik R, Morris R. Is that a pibu or a pibo? Children with reading and language deficits show difficulties in learning and overnight consolidation of phonologically similar pseudowords. Dev Sci 2020; 24:e13023. [PMID: 32691904 PMCID: PMC7988620 DOI: 10.1111/desc.13023] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2019] [Revised: 07/04/2020] [Accepted: 07/11/2020] [Indexed: 01/24/2023]
Abstract
Word learning is critical for the development of reading and language comprehension skills. Although previous studies have indicated that word learning is compromised in children with reading disability (RD) or developmental language disorder (DLD), it is less clear how word learning difficulties manifest in children with comorbid RD and DLD. Furthermore, it is unclear whether word learning deficits in RD or DLD include difficulties with offline consolidation of newly learned words. In the current study, we employed an artificial lexicon learning paradigm with an overnight design to investigate how typically developing (TD) children (N = 25), children with only RD (N = 93), and children with both RD and DLD (N = 34) learned and remembered a set of phonologically similar pseudowords. Results showed that compared to TD children, children with RD exhibited: (i) slower growth in discrimination accuracy for cohort item pairs sharing an onset (e.g. pibu‐pibo), but not for rhyming item pairs (e.g. pibu‐dibu); and (ii) lower discrimination accuracy for both cohort and rhyme item pairs on Day 2, even when accounting for differences in Day 1 learning. Moreover, children with comorbid RD and DLD showed learning and retention deficits that extended to unrelated item pairs that were phonologically dissimilar (e.g. pibu‐tupa), suggestive of broader impairments compared to children with only RD. These findings provide insights into the specific learning deficits underlying RD and DLD and motivate future research concerning how children use phonological similarity to guide the organization of new word knowledge.
Affiliation(s)
- Jeffrey G Malins
- Department of Psychology, Georgia State University, Atlanta, GA, USA; Haskins Laboratories, New Haven, CT, USA
- Nicole Landi
- Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Kayleigh Ryherd
- Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Jan C Frijters
- Faculty of Social Sciences, Department of Child and Youth Studies, Brock University, St. Catharines, ON, Canada
- James S Magnuson
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Jay G Rueckl
- Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Kenneth R Pugh
- Haskins Laboratories, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Department of Linguistics, Yale University, New Haven, CT, USA; Department of Diagnostic Radiology, Yale University School of Medicine, New Haven, CT, USA
- Rose Sevcik
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Robin Morris
- Department of Psychology, Georgia State University, Atlanta, GA, USA
|
40
|
Xue J, Li B, Yan R, Gruen JR, Feng T, Joanisse MF, Malins JG. The temporal dynamics of first and second language processing: ERPs to spoken words in Mandarin-English bilinguals. Neuropsychologia 2020; 146:107562. [PMID: 32682798 DOI: 10.1016/j.neuropsychologia.2020.107562] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Revised: 07/12/2020] [Accepted: 07/13/2020] [Indexed: 10/23/2022]
Abstract
The dynamics of bilingual spoken word recognition remain poorly characterized, especially for individuals who speak two languages that are highly dissimilar in their phonological and morphological structure. The present study compared first language (L1) and second language (L2) spoken word processing within a group of adult Mandarin-English bilinguals (N = 34; ages 18-25). Event-related potentials (ERPs) were recorded while participants completed the same cross-modal matching task separately in their L1 Mandarin and L2 English. This task consisted of deciding whether spoken words matched pictures of items. Pictures and spoken words either matched (e.g., Mandarin: TANG2-tang2; English: BELL-bell), or differed in word-initial phonemes (e.g., Mandarin: TANG2-lang2; English: BELL-shell), word-final phonemes (e.g., Mandarin: TANG2-tao2; English: BELL-bed), or whole words (e.g., Mandarin: TANG2-xia1; English: BELL-ham). Each mismatch type was associated with a pattern of modulation of the Phonological Mapping Negativity, the N400, and the Late N400 that was distinct from those of the other mismatch types yet similar between the two languages. This was interpreted as evidence of incremental processing with similar temporal dynamics in both languages. These findings support models of spoken word recognition in bilingual individuals that adopt an interactive-activation framework for both L1 and L2 processing.
Affiliation(s)
- Jin Xue
- University of Science and Technology Beijing, School of Foreign Studies, 30 Xueyuan Road, Haidian District, Beijing, 100083, China
- Banban Li
- University of Science and Technology Beijing, School of Foreign Studies, 30 Xueyuan Road, Haidian District, Beijing, 100083, China
- Rong Yan
- Institute of Leadership and Education Advanced Development, Xi'an Jiaotong-Liverpool University, Suzhou, 215123, China
- Jeffrey R Gruen
- Yale University School of Medicine, Department of Pediatrics, Yale Child Health Research Center, 464 Congress Avenue, New Haven, CT, 06520, USA; Yale University School of Medicine, Department of Genetics, 333 Cedar Street, New Haven, CT, 06520, USA
- Tianli Feng
- Beijing International Studies University, School of English Language, Literature and Culture, 1 Dingfuzhuan Nanli, Chaoyang District, Beijing, 100024, China
- Marc F Joanisse
- The University of Western Ontario, Department of Psychology & Brain and Mind Institute, Western Interdisciplinary Research Building, London, N6A 3K7, Canada; Haskins Laboratories, 300 George St. Suite 900, New Haven, CT, 06511, USA
- Jeffrey G Malins
- Yale University School of Medicine, Department of Pediatrics, Yale Child Health Research Center, 464 Congress Avenue, New Haven, CT, 06520, USA; Haskins Laboratories, 300 George St. Suite 900, New Haven, CT, 06511, USA; Georgia State University, Department of Psychology, P.O. Box 5010, Atlanta, GA, 30302, USA.
|
41
|
Peng ZE, Kan A, Litovsky RY. Development of Binaural Sensitivity: Eye Gaze as a Measure of Real-time Processing. Front Syst Neurosci 2020; 14:39. [PMID: 32733212 PMCID: PMC7360356 DOI: 10.3389/fnsys.2020.00039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 05/27/2020] [Indexed: 11/13/2022] Open
Abstract
Children localize sounds using binaural cues when navigating everyday auditory environments. While sensitivity to binaural cues reaches maturity by 8-10 years of age, large individual variability has been observed in the just-noticeable-difference (JND) thresholds for interaural time difference (ITD) among children in this age range. To understand the development of binaural sensitivity beyond JND thresholds, the "looking-while-listening" paradigm was adapted in this study to reveal the real-time decision-making behavior during ITD processing. Children ages 8-14 years with normal hearing (NH) and a group of young NH adults were tested. This novel paradigm combined eye gaze tracking with behavioral psychoacoustics to estimate ITD JNDs in a two-alternative forced-choice discrimination task. Results from simultaneous eye gaze recordings during ITD processing suggested that children had adult-like ITD JNDs, but they demonstrated immature decision-making strategies. While the time course of arriving at the initial fixation and the final judgment of ITD direction was similar across groups, children exhibited more uncertainty than adults during decision-making. Specifically, children made more fixation changes, particularly when tested using small ITD magnitudes, between the target and non-target response options prior to finalizing a judgment. These findings suggest that, while children may exhibit adult-like sensitivity to ITDs, their eye gaze behavior reveals that the processing of this binaural cue is still developing through late childhood.
Affiliation(s)
- Z. Ellen Peng
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Alan Kan
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- School of Engineering, Macquarie University, Sydney, NSW, Australia
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
|
42
|
Bhandari P, Prasad S, Mishra RK. High proficient bilinguals bring in higher executive control when encountering diverse interlocutors. JOURNAL OF CULTURAL COGNITIVE SCIENCE 2020. [DOI: 10.1007/s41809-020-00060-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
43
|
McDonald M, Kaushanskaya M. Factors modulating cross-linguistic co-activation in bilinguals. JOURNAL OF PHONETICS 2020; 81:100981. [PMID: 32699456 PMCID: PMC7375413 DOI: 10.1016/j.wocn.2020.100981] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Activation of both of a bilingual's languages during auditory word recognition has been widely documented. Here, we argue that if parallel activation in bilinguals is the result of a bottom-up process where phonetic features that overlap across two languages activate both linguistic systems, then the robustness of such parallel activation is in fact surprising. This is because phonemes across two different languages are rarely perfectly matched to each other in phonetic features. For instance, across Spanish and English, a "voiced" stop is realized in phonetically-distinct ways, and therefore, words that begin with voiced stops in English do not in fact fully overlap in phonetic features with words in Spanish. In two eye-tracking experiments using a visual world paradigm, we examined the effect of a phonemic match (English /b/ matched to Spanish /b/) vs. a phonetic match (English /b/ matched to Spanish /p/) on cross-linguistic co-activation (English words co-activating Spanish) in Spanish L1 and in Spanish L2 speakers. We found that while phonemic matching induced co-activation in both Spanish L1 and Spanish L2 speakers, phonetic matching did not. Together, these results indicate that co-activation of two languages in bilinguals may proceed through activation of categorical phonemic information rather than through activation of phonetic features.
|
44
|
Zhao L, Yuan S, Guo Y, Wang S, Chen C, Zhang S. Inhibitory control is associated with the activation of output-driven competitors in a spoken word recognition task. The Journal of General Psychology 2020; 149:1-28. [PMID: 32462997 DOI: 10.1080/00221309.2020.1771675] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Although lexical competition has been ubiquitously observed in spoken word recognition, less is known about whether the lexical competitors interfere with the recognition of the target and how lexical interference is resolved. The present study examined whether lexical competitors overlapping in output with the target would interfere with its recognition, and tested an underestimated hypothesis that domain-general inhibitory control contributes to the resolution of lexical interference. Specifically, in this study, a Visual World Paradigm was used to access the temporal dynamics of lexical activations when participants were moving the mouse cursor to the written word form of the spoken word they heard. By using Chinese characters, the orthographic similarity between the lexical competitor and target was manipulated independently of their phonological overlap. The results demonstrated that behavioral performance in the similar condition was poorer compared to that in the control condition, and that individuals with better inhibitory control (having a smaller Stroop interference effect) exhibited weaker activation of orthographic competitors (mouse trajectories less attracted by the orthographic competitors). The implications of these findings for our understanding of lexical interference and its resolution in spoken word recognition are discussed.
Affiliation(s)
- Libo Zhao
- Department of Psychology, BeiHang University, Beijing, China
- Shanshan Yuan
- Department of Psychology, BeiHang University, Beijing, China
- Ying Guo
- Department of Psychology, BeiHang University, Beijing, China
- Shan Wang
- Department of Psychology, BeiHang University, Beijing, China
- Chuansheng Chen
- Department of Psychology and Social Behavior, University of California, Irvine, CA, USA
- Shudong Zhang
- Faculty of Education, Beijing Normal University, Beijing, China
|
45
|
Galle ME, Klein-Packard J, Schreiber K, McMurray B. What Are You Waiting For? Real-Time Integration of Cues for Fricatives Suggests Encapsulated Auditory Memory. Cogn Sci 2020; 43. [PMID: 30648798 DOI: 10.1111/cogs.12700] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2017] [Revised: 10/15/2018] [Accepted: 10/25/2018] [Indexed: 11/30/2022]
Abstract
Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early cues and then updated when more information arises. These studies have uniformly shown evidence for immediate integration for a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds which requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150-350 ms later), but also integration across different frequency bands and compensation for contextual factors like coarticulation. Eye movements in the visual world paradigm showed clear evidence for a memory buffer. Results were replicated in five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account for speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. It also has major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.
Affiliation(s)
- Marcus E Galle
- Department of Psychological and Brain Sciences, University of Iowa
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa; Department of Communication Sciences and Disorders, University of Iowa; Department of Linguistics, University of Iowa; Department of Otolaryngology, University of Iowa
|
46
|
Cho SJ, Brown-Schmidt S, Boeck PD, Shen J. Modeling Intensive Polytomous Time-Series Eye-Tracking Data: A Dynamic Tree-Based Item Response Model. PSYCHOMETRIKA 2020; 85:154-184. [PMID: 32086751 DOI: 10.1007/s11336-020-09694-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Indexed: 05/28/2023]
Abstract
This paper presents a dynamic tree-based item response (IRTree) model as a novel extension of the autoregressive generalized linear mixed effect model (dynamic GLMM). We illustrate the unique utility of the dynamic IRTree model in its capability of modeling differentiated processes indicated by intensive polytomous time-series eye-tracking data. The dynamic IRTree was inspired by, but is distinct from, the dynamic GLMM previously presented by Cho, Brown-Schmidt, and Lee (Psychometrika 83(3):751-771, 2018). Unlike the dynamic IRTree, the dynamic GLMM is suitable for modeling intensive binary time-series eye-tracking data to identify visual attention to a single interest area over all other possible fixation locations. The dynamic IRTree model is a general modeling framework which can be used to model change processes (trend and autocorrelation) and which allows for decomposing data into various sources of heterogeneity. The dynamic IRTree model was illustrated using an experimental study that employed the visual-world eye-tracking technique. The results of a simulation study showed that parameter recovery of the model was satisfactory and that ignoring trend and autoregressive effects resulted in biased estimates of experimental condition effects in the same conditions found in the empirical study.
Affiliation(s)
- Paul De Boeck
- The Ohio State University, Columbus, USA
- KU Leuven, Leuven, Belgium
|
47
|
Hendrickson K, Spinelli J, Walker E. Cognitive processes underlying spoken word recognition during soft speech. Cognition 2020; 198:104196. [PMID: 32004934 DOI: 10.1016/j.cognition.2020.104196] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 01/06/2020] [Accepted: 01/18/2020] [Indexed: 11/25/2022]
Abstract
In two eye-tracking experiments using the Visual World Paradigm, we examined how listeners recognize words when faced with speech at lower intensities (40, 50, and 65 dBA). After hearing the target word, participants (n = 32) clicked the corresponding picture from a display of four images - a target (e.g., money), a cohort competitor (e.g., mother), a rhyme competitor (e.g., honey) and an unrelated item (e.g., whistle) - while their eye-movements were tracked. For slightly soft speech (50 dBA), listeners demonstrated an increase in cohort activation, whereas for rhyme competitors, activation started later and was sustained longer in processing. For very soft speech (40 dBA), listeners waited until later in processing to activate potential words, as illustrated by a decrease in activation for cohorts, and an increase in activation for rhymes. Further, the extent to which words were considered depended on word length (mono- vs. bi-syllabic words), and speech-extrinsic factors such as the surrounding listening environment. These results advance current theories of spoken word recognition by considering a range of speech levels more typical of everyday listening environments. From an applied perspective, these results motivate models of how individuals who are hard of hearing approach the task of recognizing spoken words.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, 52242 Iowa City, IA, United States of America; Department of Psychological & Brain Sciences, University of Iowa, 250 Hawkins Drive, 52242 Iowa City, IA, United States of America.
- Jessica Spinelli
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, 52242 Iowa City, IA, United States of America.
- Elizabeth Walker
- Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, 52242 Iowa City, IA, United States of America.
|
48
|
McMurray B, Ellis TP, Apfelbaum KS. How Do You Deal With Uncertainty? Cochlear Implant Users Differ in the Dynamics of Lexical Processing of Noncanonical Inputs. Ear Hear 2020; 40:961-980. [PMID: 30531260 PMCID: PMC6551335 DOI: 10.1097/aud.0000000000000681] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. DESIGN Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. RESULTS All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. 
This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. CONCLUSIONS Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.
Affiliation(s)
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Tyler P Ellis
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Keith S Apfelbaum
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
- Foundations in Learning, Inc., Coralville, Iowa, USA
|
49
|
Brown-Schmidt S, Naveiras M, De Boeck P, Cho SJ. Statistical modeling of intensive categorical time-series eye-tracking data using dynamic generalized linear mixed-effect models with crossed random effects. PSYCHOLOGY OF LEARNING AND MOTIVATION 2020. [DOI: 10.1016/bs.plm.2020.06.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
50
|
Hall JE, Owen Van Horne A, Farmer TA. Individual Differences in Verb Bias Sensitivity in Children and Adults With Developmental Language Disorder. Front Hum Neurosci 2019; 13:402. [PMID: 31803036 PMCID: PMC6877742 DOI: 10.3389/fnhum.2019.00402] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Accepted: 10/28/2019] [Indexed: 12/02/2022] Open
Abstract
A number of experiments support the hypothetical utility of statistical information for language learning and processing among both children and adults. However, tasks in these studies are often very general, and only a few include populations with developmental language disorder (DLD). We wanted to determine whether a stronger relationship might be shown when the measure of statistical learning is chosen for its relevance to the language task when including a substantial number of participants with DLD. The language ability we measured was sensitivity to verb bias - the likelihood that a verb appears with a certain argument or interpretation. A previous study showed adults with DLD were less sensitive to verb bias than their typical peers. Verb bias sensitivity had not yet been tested in children with DLD. In Study 1, 49 children, ages 7-9 years, 17 of whom were classified as having DLD, completed a task designed to measure sensitivity to verb bias through implicit and explicit measures. We found children with and without DLD showed sensitivity to verb bias in implicit but not explicit measures, with no differences between groups. In Study 2, we used a multiverse approach to investigate whether individual differences in statistical learning predicted verb bias sensitivity in these participants as well as in a dataset of adult participants. Our analysis revealed no evidence of a relationship between statistical learning and verb bias sensitivity in children, which was not unexpected given that we found no group differences in Study 1. Statistical learning predicted sensitivity to verb bias as measured through explicit measures in adults, though results were not robust. These findings suggest that verb bias may still be relatively unstable in school-age children, and thus may not play the same role in sentence processing in children as in adults.
It would also seem that individuals with DLD may not be using the same mechanisms during processing as their typically developing (TD) peers in adulthood. Thus, statistical information may differ in relevance for language processing in individuals with and without DLD.
Affiliation(s)
- Jessica E. Hall
- Speech, Language, and Hearing Sciences, The University of Arizona, Tucson, AZ, United States
- Amanda Owen Van Horne
- Communication Sciences and Disorders, University of Delaware, Newark, DE, United States
- Thomas A. Farmer
- Department of Psychology, California State University, Fullerton, Fullerton, CA, United States
|