1. Sagarra N, Casillas JV. Practice beats age: co-activation shapes heritage speakers' lexical access more than age of onset. Front Psychol 2023;14:1141174. PMID: 37377705; PMCID: PMC10292756; DOI: 10.3389/fpsyg.2023.1141174
Abstract
Probabilistic associations make language processing efficient and are honed through experience. However, it is unclear what language experience factors explain the non-monolingual processing behaviors typical of L2 learners and heritage speakers (HSs). We investigated whether age of onset (AoO), language proficiency, and language use affect the recognition of Spanish stress-tense suffix associations involving a stressed syllable that cues a present suffix (SALta "s/he jumps") and an unstressed syllable that cues a past suffix (SALtó "s/he jumped"). Adult Spanish-English HSs, English-Spanish L2 learners, and Spanish monolinguals saw a paroxytone verb (stressed initial syllable) and an oxytone verb (unstressed initial syllable), listened to a sentence containing one of the verbs, and chose the one they heard. Spanish proficiency measured grammatical and lexical knowledge, and Spanish use assessed percentage of current usage. Both bilingual groups were comparable in Spanish proficiency and use. Eye-tracking data showed that all groups fixated on target verbs above chance before hearing the syllable containing the suffix, except the HSs in the oxytones. Monolinguals fixated on targets more and earlier, although at a slower rate, than HSs and L2 learners; in turn, HSs fixated on targets more and earlier than L2 learners, except in oxytones. Higher proficiency increased target fixations in HSs (oxytones) and L2 learners (paroxytones), but greater use only increased target fixations in HSs (oxytones). Taken together, our data show that HSs' lexical access depends more on number of lexical competitors (co-activation of two L1 lexica) and type (phonotactic) frequency than token (lexical) frequency or AoO. We discuss the contribution of these findings to models in phonology, lexical access, language processing, language prediction, and human cognition.
2. Sá-Leite AR, Comesaña M, Acuña-Fariña C, Fraga I. A cautionary note on the studies using the picture-word interference paradigm: the unwelcome consequences of the random use of "in/animates". Front Psychol 2023;14:1145884. PMID: 37213376; PMCID: PMC10196210; DOI: 10.3389/fpsyg.2023.1145884
Abstract
The picture-word interference (PWI) paradigm allows us to delve into the process of lexical access in language production with great precision. It creates situations of interference between target pictures and superimposed distractor words that participants must consciously ignore to name the pictures. Yet, although the PWI paradigm has offered numerous insights at all levels of lexical representation, in this work we expose an extended lack of control regarding the variable animacy. Animacy has been shown to have a great impact on cognition, especially when it comes to the mechanisms of attention, which are highly biased toward animate entities to the detriment of inanimate objects. Furthermore, animate nouns have been shown to be semantically richer and prioritized during lexical access, with effects observable in multiple psycholinguistic tasks. Indeed, not only does the performance on a PWI task directly depend on the different stages of lexical access to nouns, but also attention has a fundamental role in it, as participants must focus on targets and ignore interfering distractors. We conducted a systematic review with the terms "picture-word interference paradigm" and "animacy" in the databases PsycInfo and Psychology Database. The search revealed that only 12 from a total of 193 PWI studies controlled for animacy, and only one considered it as a factor in the design. The remaining studies included animate and inanimate stimuli in their materials randomly, sometimes in a very disproportionate amount across conditions. We speculate about the possible impact of this uncontrolled variable mixing on many types of effects within the framework of multiple theories, namely the Animate Monitoring Hypothesis, the WEAVER++ model, and the Independent Network Model in an attempt to fuel the theoretical debate on this issue as well as the empirical research to turn speculations into knowledge.
Affiliation(s)
- Ana Rita Sá-Leite
- Cognitive Processes and Behavior Research Group, Department of Social Psychology, Basic Psychology, and Methodology, University of Santiago de Compostela, Santiago de Compostela, Spain
- Institut für Romanische Sprachen und Literaturen, Goethe University Frankfurt, Frankfurt, Germany
- *Correspondence: Ana Rita Sá-Leite
- Montserrat Comesaña
- Psycholinguistics Research Line, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Carlos Acuña-Fariña
- Cognitive Processes and Behavior Research Group, Department of English and German, University of Santiago de Compostela, Santiago de Compostela, Spain
- Isabel Fraga
- Cognitive Processes and Behavior Research Group, Department of Social Psychology, Basic Psychology, and Methodology, University of Santiago de Compostela, Santiago de Compostela, Spain
3. Anderson EJ, Midgley KJ, Holcomb PJ, Riès SK. Taxonomic and thematic semantic relationships in picture naming as revealed by Laplacian-transformed event-related potentials. Psychophysiology 2022;59:e14091. PMID: 35554943; PMCID: PMC9788343; DOI: 10.1111/psyp.14091
Abstract
Semantically related concepts co-activate when we speak. Prior research reported both behavioral interference and facilitation due to co-activation during picture naming. Different word relationships may account for some of this discrepancy. Taxonomically related words (e.g., WOLF-DOG) have been associated with semantic interference; thematically related words (e.g., BONE-DOG) have been associated with facilitation. Although these different semantic relationships have been associated with opposite behavioral outcomes, electrophysiological studies have found inconsistent effects on event-related potentials. We conducted a picture-word interference electroencephalography experiment to examine word retrieval dynamics in these different semantic relationships. Importantly, we used traditional monopolar analysis as well as Laplacian transformation allowing us to examine spatially deblurred event-related components. Both analyses revealed greater negativity (150-250 ms) for unrelated than related taxonomic pairs, though more restricted in space for thematic pairs. Critically, Laplacian analyses revealed a larger negative-going component in the 300 to 500 ms time window in taxonomically related versus unrelated pairs which were restricted to a left frontal recording site. In parallel, an opposite effect was found in the same time window but localized to a left parietal site. Finding these opposite effects in the same time window was feasible thanks to the use of the Laplacian transformation and suggests that frontal control processes are concurrently engaged with cascading effects of the spread of activation through semantically related representations.
Affiliation(s)
- Elizabeth J. Anderson
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University, San Diego, California, USA
- Joint Doctoral Program in Language and Communicative Disorders, University of California San Diego, La Jolla, California, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, California, USA
- Stephanie K. Riès
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, California, USA
4. Hänel-Faulhaber B, Groen MA, Röder B, Friedrich CK. Ongoing Sign Processing Facilitates Written Word Recognition in Deaf Native Signing Children. Front Psychol 2022;13:917700. PMID: 35992405; PMCID: PMC9390089; DOI: 10.3389/fpsyg.2022.917700
Abstract
Signed and written languages are intimately related in proficient signing readers. Here, we tested whether deaf native signing beginning readers are able to make rapid use of ongoing sign language to facilitate recognition of written words. Deaf native signing children (mean 10 years, 7 months) received prime-target pairs with sign word onsets as primes and written words as targets. In a control group of hearing children (matched in their reading abilities to the deaf children, mean 8 years, 8 months), spoken word onsets were instead used as primes. Targets (written German words) were completions either of the German signs or of the spoken word onsets. The participants' task was to decide whether the target word was a possible German word. Sign onsets facilitated processing of written targets in deaf children similarly to spoken word onsets facilitating processing of written targets in hearing children. In both groups, priming elicited similar effects in the simultaneously recorded event-related potentials (ERPs), starting as early as 200 ms after the onset of the written target. These results suggest that beginning readers can use ongoing lexical processing in their native language - be it signed or spoken - to facilitate written word recognition. We conclude that intimate interactions between sign and written language might in turn facilitate reading acquisition in deaf beginning readers.
Affiliation(s)
- Brigitte Röder
- Biological Psychology and Neuropsychology, Universität Hamburg, Hamburg, Germany
- Claudia K. Friedrich
- Department of Developmental Psychology, University of Tübingen, Tübingen, Germany
5. Declerck M, Meade G, Midgley KJ, Holcomb PJ, Roelofs A, Emmorey K. Language control in bimodal bilinguals: Evidence from ERPs. Neuropsychologia 2021;161:108019. PMID: 34487737; DOI: 10.1016/j.neuropsychologia.2021.108019
Abstract
It is currently unclear to what degree language control, which minimizes non-target language interference and increases the probability of selecting target-language words, is similar for sign-speech (bimodal) bilinguals and spoken language (unimodal) bilinguals. To further investigate the nature of language control processes in bimodal bilinguals, we conducted the first event-related potential (ERP) language switching study with hearing American Sign Language (ASL)-English bilinguals. The results showed a pattern that has not been observed in any unimodal language switching study: a switch-related positivity over anterior sites and a switch-related negativity over posterior sites during ASL production in both early and late time windows. No such pattern was found during English production. We interpret these results as evidence that bimodal bilinguals uniquely engage language control at the level of output modalities.
Affiliation(s)
- Mathieu Declerck
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, USA; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; Linguistics and Literary Studies, Vrije Universiteit Brussel, Brussels, Belgium
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, USA
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, USA
- Ardi Roelofs
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, USA
6. Połczyńska MM, Bookheimer SY. General principles governing the amount of neuroanatomical overlap between languages in bilinguals. Neurosci Biobehav Rev 2021;130:1-14. PMID: 34400175; PMCID: PMC8958881; DOI: 10.1016/j.neubiorev.2021.08.005
Abstract
The literature has identified many important factors affecting the extent to which languages in bilinguals rely on the same neural populations within a given brain region. The factors include the age of acquisition of the second language (L2), proficiency level of the first language (L1) and L2, and the amount of language exposure, among others. What is lacking is a set of global principles that explain how the many factors relate to the degree to which languages overlap neuroanatomically in bilinguals. We are offering a set of such principles that together account for the numerous sources of data that have been examined individually but not collectively: (1) the principle of acquisition similarity between L1 and L2, (2) the principle of linguistic similarity between L1 and L2, and (3) the principle of cognitive control and effort. Referencing the broad characteristics of language organization in bilinguals, as presented by the principles, can provide a roadmap for future clinical and basic science research.
Affiliation(s)
- Monika M Połczyńska
- Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine at UCLA, University of California, Los Angeles, CA, USA.
- Susan Y Bookheimer
- Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine at UCLA, University of California, Los Angeles, CA, USA.
7. Hoshino N, Beatty-Martínez AL, Navarro-Torres CA, Kroll JF. Do Cross-Language Script Differences Enable Bilinguals to Function Selectively When Speaking in One Language Alone? Front Commun 2021;6:668381. PMID: 35419452; PMCID: PMC9004719; DOI: 10.3389/fcomm.2021.668381
Abstract
The present study examined the role of script in bilingual speech planning by comparing the performance of same and different-script bilinguals. Spanish-English bilinguals (Experiment 1) and Japanese-English bilinguals (Experiment 2) performed a picture-word interference task in which they were asked to name a picture of an object in English, their second language, while ignoring a visual distractor word in Spanish or Japanese, their first language. Results replicated the general pattern seen in previous bilingual picture-word interference studies for the same-script, Spanish-English bilinguals but not for the different-script, Japanese-English bilinguals. Both groups showed translation facilitation, whereas only Spanish-English bilinguals demonstrated semantic interference, phonological facilitation, and phono-translation facilitation. These results suggest that when the script of the language not in use is present in the task, bilinguals appear to exploit the perceptual difference as a language cue to direct lexical access to the intended language earlier in the process of speech planning.
Affiliation(s)
- Judith F. Kroll
- Department of Language Science, University of California, Irvine, Irvine, CA, United States
8. Manhardt F, Brouwer S, Özyürek A. A Tale of Two Modalities: Sign and Speech Influence Each Other in Bimodal Bilinguals. Psychol Sci 2021;32:424-436. PMID: 33621474; DOI: 10.1177/0956797620968789
Abstract
Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals' expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals' speech and signs are shaped by two languages from different modalities.
Affiliation(s)
- Aslı Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Center for Cognition, Radboud University
9. Emmorey K, Mott M, Meade G, Holcomb PJ, Midgley KJ. Lexical selection in bimodal bilinguals: ERP evidence from picture-word interference. Lang Cogn Neurosci 2020;36:840-853. PMID: 34485589; PMCID: PMC8411899; DOI: 10.1080/23273798.2020.1821905
Abstract
The picture word interference (PWI) paradigm and ERPs were used to investigate whether lexical selection in deaf and hearing ASL-English bilinguals occurs via lexical competition or whether the response exclusion hypothesis (REH) for PWI effects is supported. The REH predicts that semantic interference should not occur for bimodal bilinguals because sign and word responses do not compete within an output buffer. Bimodal bilinguals named pictures in ASL, preceded by either a translation equivalent, semantically-related, or unrelated English written word. In both the translation and semantically-related conditions bimodal bilinguals showed facilitation effects: reduced RTs and N400 amplitudes for related compared to unrelated prime conditions. We also observed an unexpected focal left anterior positivity that was stronger in the translation condition, which we speculate may be due to articulatory priming. Overall, the results support the REH and models of bilingual language production that assume lexical selection occurs without competition between languages.
Affiliation(s)
- Karen Emmorey
- Corresponding author: Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
- Megan Mott
- Psychology Department, San Diego State University
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University, University of California, San Diego
10. Riès SK, Nadalet L, Mickelsen S, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Pre-output Language Monitoring in Sign Production. J Cogn Neurosci 2020;32:1079-1091. PMID: 32027582; PMCID: PMC7234262; DOI: 10.1162/jocn_a_01542
Abstract
A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture-word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in preoutput language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors including differences in language proficiency or more variable lexical access to motor programming latencies for hearing than deaf signers.
Affiliation(s)
- Karen Emmorey
- San Diego State University
- University of California, San Diego
11. Zhao L, Yuan S, Guo Y, Wang S, Chen C, Zhang S. Inhibitory control is associated with the activation of output-driven competitors in a spoken word recognition task. J Gen Psychol 2020;149:1-28. PMID: 32462997; DOI: 10.1080/00221309.2020.1771675
Abstract
Although lexical competition has been ubiquitously observed in spoken word recognition, less is known about whether lexical competitors interfere with the recognition of the target and how lexical interference is resolved. The present study examined whether lexical competitors overlapping in output with the target would interfere with its recognition, and tested an underestimated hypothesis that domain-general inhibitory control contributes to the resolution of lexical interference. Specifically, in this study, a Visual World Paradigm was used to assess the temporal dynamics of lexical activations when participants were moving the mouse cursor to the written word form of the spoken word they heard. By using Chinese characters, the orthographic similarity between the lexical competitor and target was manipulated independently of their phonological overlap. The results demonstrated that behavioral performance in the similar condition was poorer compared to that in the control condition, and that individuals with better inhibitory control (having a smaller Stroop interference effect) exhibited weaker activation of orthographic competitors (mouse trajectories less attracted by the orthographic competitors). The implications of these findings for our understanding of lexical interference and its resolution in spoken word recognition were discussed.
Affiliation(s)
- Libo Zhao
- Department of Psychology, BeiHang University, Beijing, China
- Shanshan Yuan
- Department of Psychology, BeiHang University, Beijing, China
- Ying Guo
- Department of Psychology, BeiHang University, Beijing, China
- Shan Wang
- Department of Psychology, BeiHang University, Beijing, China
- Chuansheng Chen
- Department of Psychology and Social Behavior, University of California, Irvine, CA, USA
- Shudong Zhang
- Faculty of Education, Beijing Normal University, Beijing, China
12. Robinson Anthony JJD, Blumenfeld HK, Potapova I, Pruitt-Lord SL. Language dominance predicts cognate effects and metalinguistic awareness in preschool bilinguals. Int J Biling Educ Biling 2020;25:922-941. PMID: 35399223; PMCID: PMC8992601; DOI: 10.1080/13670050.2020.1735990
Abstract
The current work investigates whether language dominance predicts transfer of skills across cognitive-linguistic levels from the native language (Spanish) to the second language (English) in bilingual preschoolers. Sensitivity to cognates (elephant/elefante in English/Spanish) and metalinguistic awareness (MLA) have both been shown to transfer from the dominant to the nondominant language. Examining these types of transfer together using a continuous measure of language dominance may allow us to better understand the effect of the home language in children learning a majority language in preschool. Forty-six preschool-aged, Spanish-English bilinguals completed English receptive vocabulary and metalinguistic tasks indexing cognate effects and MLA. Language dominance was found to predict crosslinguistic (cognate) facilitation from Spanish to English. In addition, MLA skills also transferred from Spanish to English for children with lower English proficiency, and no transfer of MLA was evident for children with higher English proficiency. Altogether, findings suggest that transfer from a dominant first language to a nondominant second language happens at linguistic and cognitive-linguistic levels in preschoolers, although possibly influenced by second language proficiency. The current study has implications for supporting the home language for holistic cognitive-linguistic development.
Affiliation(s)
- Jonathan J D Robinson Anthony
- San Diego State University/University of California, San Diego Joint Doctoral Program in Language & Communicative Disorders
13. Emmorey K, Li C, Petrich J, Gollan TH. Turning languages on and off: Switching into and out of code-blends reveals the nature of bilingual language control. J Exp Psychol Learn Mem Cogn 2020;46:443-454. PMID: 31246060; PMCID: PMC6933100; DOI: 10.1037/xlm0000734
Abstract
When spoken language (unimodal) bilinguals switch between languages, they must simultaneously inhibit one language and activate the other language. Because American Sign Language (ASL)-English (bimodal) bilinguals can switch into and out of code-blends (simultaneous production of a sign and a word), we can tease apart the cost of inhibition (turning a language off) and activation (turning a language on). Results from a cued picture-naming task with 43 bimodal bilinguals revealed a significant cost to turn off a language (switching out of a code-blend), but no cost to turn on a language (switching into a code-blend). Switching from single to dual lexical retrieval (adding a language) was also not costly. These patterns held for both languages regardless of default language, that is, whether switching between speaking and code-blending (English default) or between signing and code-blending (ASL default). Overall, the results support models of bilingual language control that assume a primary role for inhibitory control and indicate that disengaging from producing a language is more difficult than engaging a new language.
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
- Chuchu Li
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Ave., La Jolla, CA 92093-0948
- Jennifer Petrich
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
- Tamar H. Gollan
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Ave., La Jolla, CA 92093-0948
14. Midgley KJ, Medina YE, Lee B. Studying bilingual learners and users of spoken and signed languages: A neuro-cognitive approach. Psychol Learn Motiv 2020. DOI: 10.1016/bs.plm.2020.03.002
15. Quandt LC, Kubicek E. Sensorimotor characteristics of sign translations modulate EEG when deaf signers read English. Brain Lang 2018;187:9-17. PMID: 30399489; DOI: 10.1016/j.bandl.2018.10.001
Abstract
Bilingual individuals automatically translate written words from one language to another. While this process is established in spoken-language bilinguals, there is less known about its occurrence in deaf bilinguals who know signed and spoken languages. Since sign language uses motion and space to convey linguistic content, it is possible that action simulation in the brain's sensorimotor system plays a role in this process. We recorded EEG from deaf participants fluent in ASL as they read individual English words and found significant differences in alpha and beta EEG at central electrode sites during the reading of English words whose ASL translations use two hands, compared to English words whose ASL translations use one hand. Hearing non-signers did not show any differences between conditions. These results demonstrate the involvement of the sensorimotor system in cross-linguistic, cross-modal translation, and suggest that covert action simulation processes are involved when deaf signers read.
Affiliation(s)
- Lorna C Quandt
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, 800 Florida Ave NE, Washington, D.C. 20002, USA; Department of Psychology, Gallaudet University, 800 Florida Ave NE, Washington, D.C. 20002, USA.
- Emily Kubicek
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, 800 Florida Ave NE, Washington, D.C. 20002, USA
16. Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018;33:1092-1106. PMID: 30662923; PMCID: PMC6335044; DOI: 10.1080/23273798.2018.1446543
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
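The relatedness criterion used for these stimuli (two of three phonological parameters shared) can be sketched as a simple pair classifier. This is an illustrative sketch, not the authors' stimulus-coding code; the parameter values below are hypothetical.

```python
# Signs are coded by three phonological parameters; a prime-target pair
# counts as phonologically related when exactly two of the three match.
PARAMETERS = ("handshape", "location", "movement")

def shared_parameters(prime: dict, target: dict) -> int:
    """Count how many of the three parameters two signs share."""
    return sum(prime[p] == target[p] for p in PARAMETERS)

def phonologically_related(prime: dict, target: dict) -> bool:
    return shared_parameters(prime, target) == 2

# Hypothetical parameter codings for two ASL signs:
sign_a = {"handshape": "B", "location": "chin", "movement": "contact"}
sign_b = {"handshape": "B", "location": "chin", "movement": "circle"}
print(phonologically_related(sign_a, sign_b))  # True: 2 of 3 parameters shared
```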
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
|
17
|
Williams JT, Stone A, Newman SD. Operationalization of Sign Language Phonological Similarity and its Effects on Lexical Access. J Deaf Stud Deaf Educ 2017; 22:303-315. [PMID: 28575411 PMCID: PMC6364953 DOI: 10.1093/deafed/enx014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/09/2016] [Revised: 03/20/2017] [Accepted: 04/12/2017] [Indexed: 06/07/2023]
Abstract
The cognitive mechanisms underlying sign language lexical access remain poorly understood. This study investigated whether phonological similarity facilitates lexical retrieval in sign languages using measures from a new lexical database for American Sign Language. Additionally, it aimed to determine which similarity metric best fits the present data, both to inform theories of how phonological similarity is structured within the lexicon and to aid in the operationalization of phonological similarity in sign language. Sign repetition latencies and accuracy were obtained when native signers were asked to reproduce a sign displayed on a computer screen. As predicted, phonological similarity facilitated repetition latencies and accuracy as long as there were no strict constraints on the type of sublexical features that overlapped. The data converged to suggest that one similarity measure, MaxD, defined as the overlap of any 4 sublexical features, likely best represents mechanisms of phonological similarity in the mental lexicon. Together, these data suggest that lexical access in sign language is facilitated by phonologically similar lexical representations in memory and that the optimal operationalization imposes liberal constraints on overlap, requiring 4 out of 5 sublexical features, similar to the majority of extant definitions in the literature.
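A MaxD-style criterion of this kind can be sketched as follows, assuming signs are coded by five sublexical features. The feature names and codings below are assumptions for illustration, not the database's actual coding scheme.

```python
# Two signs count as MaxD neighbors when any four of the five
# sublexical features overlap (feature names are illustrative).
FEATURES = ("handshape", "location", "movement", "orientation", "selected_fingers")

def overlap(sign1: dict, sign2: dict) -> int:
    """Count overlapping sublexical features between two coded signs."""
    return sum(sign1[f] == sign2[f] for f in FEATURES)

def maxd_neighbors(sign1: dict, sign2: dict) -> bool:
    return overlap(sign1, sign2) >= 4

# Hypothetical codings: sign_y differs from sign_x only in movement.
sign_x = {"handshape": "1", "location": "temple", "movement": "tap",
          "orientation": "in", "selected_fingers": "index"}
sign_y = dict(sign_x, movement="circle")
print(maxd_neighbors(sign_x, sign_y))  # True: 4 of 5 features overlap
```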
|
18
|
Williams JT, Newman SD. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language. J Psycholinguist Res 2017; 46:211-225. [PMID: 27112154 DOI: 10.1007/s10936-016-9432-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 06/05/2023]
Abstract
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, relatively few studies have generalized these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports from native deaf signers who were processing only sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, the increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
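Neighborhood density of the kind manipulated here can be sketched as a simple count over a coded lexicon: the number of other signs sharing a sign's value on a given parameter such as handshape or location. This is a minimal illustration; the codings and field names are hypothetical.

```python
from collections import Counter

def density(lexicon: list, sign: dict, parameter: str) -> int:
    """Number of other lexicon entries sharing this sign's value on one parameter."""
    counts = Counter(entry[parameter] for entry in lexicon)
    return counts[sign[parameter]] - 1  # exclude the sign itself

# Toy three-sign lexicon with hypothetical codings:
lexicon = [
    {"gloss": "A", "handshape": "5", "location": "chest"},
    {"gloss": "B", "handshape": "5", "location": "chin"},
    {"gloss": "C", "handshape": "1", "location": "chest"},
]
print(density(lexicon, lexicon[0], "handshape"))  # 1: one other sign shares handshape "5"
```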
Affiliation(s)
- Joshua T Williams
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Cognitive Science, Indiana University, Bloomington, IN, USA
- Speech and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Cognitive Neuroimaging Laboratory, Indiana University, 1101 E. 10th Street, Bloomington, IN, 47405, USA
- Sharlene D Newman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Cognitive Science, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
|
19
|
Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Biling (Camb Engl) 2016; 19:223-242. [PMID: 28804269 PMCID: PMC5553278 DOI: 10.1017/s1366728915000085] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Indexed: 05/18/2023]
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H Gollan
- University of California San Diego, Department of Psychiatry
|
20
|
Giezen MR, Blumenfeld HK, Shook A, Marian V, Emmorey K. Parallel language activation and inhibitory control in bimodal bilinguals. Cognition 2015; 141:9-25. [PMID: 25912892 PMCID: PMC4466161 DOI: 10.1016/j.cognition.2015.04.009] [Citation(s) in RCA: 56] [Impact Index Per Article: 6.2] [Received: 02/17/2014] [Revised: 04/01/2015] [Accepted: 04/03/2015] [Indexed: 11/30/2022]
Abstract
Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level.
Affiliation(s)
- Marcel R Giezen
- San Diego State University, 5250 Campanile Drive, San Diego, CA 92182, USA
- Henrike K Blumenfeld
- School of Speech, Language and Hearing Sciences, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-1518, USA
- Anthony Shook
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-1518, USA
|