1. McGarry ME, Midgley KJ, Holcomb PJ, Emmorey K. An ERP investigation of perceptual vs motoric iconicity in sign production. Neuropsychologia 2024; 203:108966. PMID: 39098388; PMCID: PMC11462866; DOI: 10.1016/j.neuropsychologia.2024.108966.
Abstract
The type of form-meaning mapping for iconic signs can vary. For perceptually-iconic signs there is a correspondence between visual features of a referent (e.g., the beak of a bird) and the form of the sign (e.g., extended thumb and index finger at the mouth for the American Sign Language (ASL) sign BIRD). For motorically-iconic signs there is a correspondence between how an object is held or manipulated and the form of the sign (e.g., the ASL sign FLUTE depicts how a flute is played). Previous studies have found that iconic signs are retrieved faster in picture-naming tasks, but the type of iconicity has not been manipulated. We conducted an ERP study in which deaf signers and a control group of English speakers named pictures that targeted perceptually-iconic, motorically-iconic, or non-iconic ASL signs. For signers (unlike the control group), naming latencies varied by iconicity type: perceptually-iconic < motorically-iconic < non-iconic signs. A reduction in N400 amplitude was only found for the perceptually-iconic signs, compared to both non-iconic and motorically-iconic signs. No modulations of N400 amplitudes were observed for the control group. We suggest that this pattern of results arises because pictures eliciting perceptually-iconic signs can more effectively prime lexical access due to greater alignment between features of the picture and the semantic and phonological features of the sign. We speculate that naming latencies are facilitated for motorically-iconic signs due to later processes (e.g., faster phonological encoding via cascading activation from semantic features). Overall, the results indicate that the type of iconicity plays a role in sign production when elicited by picture-naming tasks.
Affiliation(s)
- Meghan E McGarry
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
2. Gimeno-Martínez M, Baus C. Characterizing language production across modalities. Cogn Neuropsychol 2024; 41:1-17. PMID: 38377394; DOI: 10.1080/02643294.2024.2315823.
Abstract
This study investigates factors influencing lexical access in language production across modalities (signed and oral). Data from deaf and hearing signers were reanalyzed (Baus & Costa, 2015, On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Research, 1609(1), 40-53. https://doi.org/10.1016/j.brainres.2015.03.013; Gimeno-Martínez & Baus, 2022, Iconicity in sign language production: Task matters. Neuropsychologia, 167, 108166. https://doi.org/10.1016/j.neuropsychologia.2022.108166) to test the influence of psycholinguistic variables and ERP mean amplitudes on signing and naming latencies. Deaf signers' signing latencies were influenced by sign iconicity in the picture signing task, and by spoken psycholinguistic variables in the word-to-sign translation task. Additionally, ERP amplitudes before response influenced signing but not translation latencies. Hearing signers' latencies, both signing and naming, were influenced by sign iconicity and word frequency, with early ERP amplitudes predicting only naming latencies. These findings highlight general and modality-specific determinants of lexical access in language production.
Affiliation(s)
- Marc Gimeno-Martínez
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Cristina Baus
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
3. Bylund E, Antfolk J, Abrahamsson N, Olstad AMH, Norrman G, Lehtonen M. Does bilingualism come with linguistic costs? A meta-analytic review of the bilingual lexical deficit. Psychon Bull Rev 2023; 30:897-913. PMID: 36327027; PMCID: PMC10264296; DOI: 10.3758/s13423-022-02136-7.
Abstract
A series of recent studies have shown that the once-assumed cognitive advantage of bilingualism finds little support in the evidence available to date. Surprisingly, however, the view that bilingualism incurs linguistic costs (the so-called lexical deficit) has not yet been subjected to the same degree of scrutiny, despite its centrality for our understanding of the human capacity for language. The current study implemented a comprehensive meta-analysis to address this gap. By analyzing 478 effect sizes from 130 studies on expressive vocabulary, we found that observed lexical deficits could not be attributed to bilingualism: Simultaneous bilinguals (who acquired both languages from birth) did not exhibit any lexical deficit, nor did sequential bilinguals (who acquired one language from birth and a second language after that) when tested in their mother tongue. Instead, systematic evidence for a lexical deficit was found among sequential bilinguals when tested in their second language, and more so for late than for early second language learners. This result suggests that a lexical deficit may be a phenomenon of second language acquisition rather than bilingualism per se.
Affiliation(s)
- Emanuel Bylund
- Department of General Linguistics, Stellenbosch University, Stellenbosch, South Africa
- Stockholm University, Stockholm, Sweden
- Minna Lehtonen
- University of Oslo, Oslo, Norway
- University of Turku, Turku, Finland
4. Current exposure to a second language modulates bilingual visual word recognition: An EEG study. Neuropsychologia 2022; 164:108109. PMID: 34875300; DOI: 10.1016/j.neuropsychologia.2021.108109.
Abstract
Bilingual word recognition has been the focus of much empirical work, but research on potential modulating factors, such as individual differences in L2 exposure, is limited. This study represents a first attempt to determine the impact of L2 exposure on bilingual word recognition in both languages. To this end, highly fluent bilinguals were split into two groups according to their L2 exposure and performed a semantic categorisation task while their behavioural responses and electro-cortical (EEG) signal were recorded. We predicted that lower L2 exposure should produce less efficient L2 word recognition processing at the behavioural level, alongside neurophysiological changes at the early pre-lexical and lexical levels, but not at a post-lexical level. Results confirmed this hypothesis in accuracy and in the N1 component of the EEG signal. Specifically, bilinguals with lower L2 exposure were less accurate in determining semantic relatedness when target words were presented in L2, whereas this condition posed no such problem for bilinguals with higher L2 exposure. Moreover, L2 exposure modulated early processes of word recognition not only in L2 but also in L1 brain activity, thus challenging a fully non-selective access account (cf. BIA+ model, Dijkstra and van Heuven, 2002). We interpret our findings with reference to the frequency-lag hypothesis (Gollan et al., 2011).
5. Gimeno-Martínez M, Baus C. Iconicity in sign language production: Task matters. Neuropsychologia 2022; 167:108166. DOI: 10.1016/j.neuropsychologia.2022.108166.
6. The effects of multiple linguistic variables on picture naming in American Sign Language. Behav Res Methods 2021; 54:2502-2521. PMID: 34918219; DOI: 10.3758/s13428-021-01751-x.
Abstract
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, and clinicians for a variety of research, instructional, or assessment purposes.
7. Trettenbrein PC, Pendzich NK, Cramer JM, Steinbach M, Zaccarella E. Psycholinguistic norms for more than 300 lexical signs in German Sign Language (DGS). Behav Res Methods 2021; 53:1817-1832. PMID: 33575986; PMCID: PMC8516755; DOI: 10.3758/s13428-020-01524-y.
Abstract
Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as age of acquisition have been found to play an important role, but such information is not yet available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign's correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips from automated motion-tracking data. All norms, stimulus clips, data, as well as code used for analysis are made available through the Open Science Framework in the hope that they may prove to be useful to other researchers: https://doi.org/10.17605/OSF.IO/MZ8J4.
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany
- International Max Planck Research School on Neuroscience of Communication: Structure, Function, & Plasticity (IMPRS NeuroCom), Stephanstraße 1a, Leipzig, 04103, Germany
- Nina-Kristin Pendzich
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Jens-Michael Cramer
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Markus Steinbach
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany
8. Riès SK, Nadalet L, Mickelsen S, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Pre-output Language Monitoring in Sign Production. J Cogn Neurosci 2020; 32:1079-1091. PMID: 32027582; PMCID: PMC7234262; DOI: 10.1162/jocn_a_01542.
Abstract
A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture-word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in pre-output language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors, including differences in language proficiency or more variable lexical access to motor programming latencies for hearing than for deaf signers.
Affiliation(s)
- Karen Emmorey
- San Diego State University
- University of California, San Diego
9. Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiology of Language 2020; 1:249-267. PMID: 33043298; PMCID: PMC7544239; DOI: 10.1162/nol_a_00012.
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
Affiliation(s)
- Kurt Winsler
- Department of Psychology, University of California, Davis
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique
10. Lu A, Wang L, Guo Y, Zeng J, Zheng D, Wang X, Shao Y, Wang R. The Roles of Relative Linguistic Proficiency and Modality Switching in Language Switch Cost: Evidence from Chinese Visual Unimodal and Bimodal Bilinguals. J Psycholinguist Res 2019; 48:1-18. PMID: 28865039; DOI: 10.1007/s10936-017-9519-6.
Abstract
The current study investigated the mechanism of language switching in unbalanced visual unimodal bilinguals as well as balanced and unbalanced bimodal bilinguals during a picture naming task. All three groups exhibited significant switch costs across the two languages, with a symmetrical switch cost in balanced bimodal bilinguals and asymmetrical switch costs in unbalanced unimodal and bimodal bilinguals. Moreover, the relative proficiency of the two languages, but not their absolute proficiency, had an effect on language switch cost. For the bimodal bilinguals, the language switch cost also arose from modality switching. These findings suggest that the language switch cost might originate from multiple sources, both outside (e.g., modality switching) and inside (e.g., the relative proficiency of the two languages) the linguistic lexicon.
Affiliation(s)
- Aitao Lu, Lu Wang, Yuyang Guo, Jiahong Zeng, Yulan Shao, Ruiming Wang
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Dongping Zheng
- Department of Second Language Studies, University of Hawaii, Honolulu, HI, USA
- Xiaolu Wang
- School of International Studies and Center for the Study of Language and Cognition, Zhejiang University, Zhejiang, China
- School of Foreign Language Studies, Ningbo Institute of Technology, Zhejiang University, Zhejiang, China
- School of Humanities and Communication Arts, Western Sydney University, Sydney, NSW, Australia
11. Sehyr ZS, Giezen MR, Emmorey K. Comparing Semantic Fluency in American Sign Language and English. J Deaf Stud Deaf Educ 2018; 23:399-407. PMID: 29733368; PMCID: PMC6146786; DOI: 10.1093/deafed/eny013.
Abstract
This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modality, but fingerspelled responses should be included in ASL fluency scores.
Affiliation(s)
- Zed Sevcikova Sehyr
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Marcel R Giezen
- Basque Center on Cognition, Brain and Language, San Sebastian, Donostia, Spain
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
12.
Abstract
ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org .
13. Wu YJ, Thierry G. Brain potentials predict language selection before speech onset in bilinguals. Brain Lang 2017; 171:23-30. PMID: 28445784; DOI: 10.1016/j.bandl.2017.04.002.
Abstract
Studies of language production in bilinguals have seldom considered the fact that language selection likely involves proactive control. Here, we show that Chinese-English bilinguals actively inhibit the language not to be used before the onset of a picture to be named. Depending on the nature of a directive cue, participants named a subsequent picture in their native language, in their second language, or remained silent. The cue elicited a contingent negative variation of event-related brain potentials, greater in amplitude when the cue announced a naming trial than when it announced a silent trial. In addition, the negativity was greater in amplitude when the picture was to be named in English than in Chinese, suggesting that preparation for speech in the second language requires more inhibition than preparation for speech in the native language. This result is the first direct neurophysiological evidence consistent with proactive inhibitory control in bilingual production.
Affiliation(s)
- Yan Jing Wu
- College of Psychology and Sociology, Shenzhen University, 518060, China; Department of Psychology, The University of Sheffield, S10 2TP, UK
- Guillaume Thierry
- School of Psychology, Bangor University, LL57 2AS, UK; Centre for Research on Bilingualism, Bangor University, LL57 2AS, UK
14. Giezen MR, Emmorey K. Evidence for a bimodal bilingual disadvantage in letter fluency. Bilingualism (Cambridge, England) 2017; 20:42-48. PMID: 28785168; PMCID: PMC5544419; DOI: 10.1017/s1366728916000596.
Abstract
Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL-English bilinguals retrieved fewer words in a letter fluency task in their dominant language compared to monolingual English speakers with equal vocabulary level. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators.
Affiliation(s)
- Marcel R Giezen
- BCBL. Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
15. Gutierrez-Sigut E, Payne H, MacSweeney M. Examining the contribution of motor movement and language dominance to increased left lateralization during sign generation in native signers. Brain Lang 2016; 159:109-117. PMID: 27388786; PMCID: PMC4980063; DOI: 10.1016/j.bandl.2016.06.004.
Abstract
The neural systems supporting speech and sign processing are very similar, although not identical. In a previous fTCD study of hearing native signers (Gutierrez-Sigut, Daws, et al., 2015) we found stronger left lateralization for sign than speech. Given that this increased lateralization could not be explained by hand movement alone, the contribution of motor movement versus 'linguistic' processes to the strength of hemispheric lateralization during sign production remains unclear. Here we directly contrast lateralization strength of covert versus overt signing during phonological and semantic fluency tasks. To address the possibility that hearing native signers' elevated lateralization indices (LIs) were due to performing a task in their less dominant language, here we test deaf native signers, whose dominant language is British Sign Language (BSL). Signers were more strongly left lateralized for overt than covert sign generation. However, the strength of lateralization was not correlated with the amount of time producing movements of the right hand. Comparisons with previous data from hearing native English speakers suggest stronger laterality indices for sign than speech in both covert and overt tasks. This increased left lateralization may be driven by specific properties of sign production such as the increased use of self-monitoring mechanisms or the nature of phonological encoding of signs.
Affiliation(s)
- Eva Gutierrez-Sigut
- Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Departamento de Metodología de las Ciencias del Comportamiento, Universitat de València, Spain
- Heather Payne
- Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
- Mairéad MacSweeney
- Deafness, Cognition & Language Research Centre, University College London, United Kingdom; Institute of Cognitive Neuroscience, University College London, United Kingdom
16. Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism (Cambridge, England) 2016; 19:223-242. PMID: 28804269; PMCID: PMC5553278; DOI: 10.1017/s1366728915000085.
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H Gollan
- University of California San Diego, Department of Psychiatry
17
Hauser PC, Paludneviciene R, Riddle W, Kurz KB, Emmorey K, Contreras J. American Sign Language Comprehension Test: A Tool for Sign Language Researchers. Journal of Deaf Studies and Deaf Education 2016; 21:64-69. [PMID: 26590608 DOI: 10.1093/deafed/env051] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2015] [Accepted: 10/17/2015] [Indexed: 06/05/2023]
Abstract
The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf non-native signers, and hearing ASL students. The results revealed that the ASL-CT has good internal reliability (α = 0.834). Discriminant validity was established by demonstrating that deaf native signers performed significantly better than deaf non-native signers and hearing native signers. Concurrent validity was established by demonstrating that test results positively correlated with another measure of ASL ability (r = .715) and that hearing ASL students' performance positively correlated with the level of ASL courses they were taking (r = .726). Researchers can use the ASL-CT to characterize an individual's ASL comprehension skills, to establish a minimal skill level as an inclusion criterion for a study, to group study participants by ASL skill (e.g., proficient vs. nonproficient), or to provide a measure of ASL skill as a dependent variable.
Affiliation(s)
- Peter C Hauser
- National Technical Institute for the Deaf, Rochester Institute of Technology; NSF Science of Learning Center on Visual Language and Visual Learning
- Raylene Paludneviciene
- NSF Science of Learning Center on Visual Language and Visual Learning; Gallaudet University
- Kim B Kurz
- National Technical Institute for the Deaf, Rochester Institute of Technology
- Karen Emmorey
- NSF Science of Learning Center on Visual Language and Visual Learning; San Diego State University
- Jessica Contreras
- National Technical Institute for the Deaf, Rochester Institute of Technology; NSF Science of Learning Center on Visual Language and Visual Learning
18
Baus C, Costa A. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Res 2015; 1609:40-53. [PMID: 25801115 DOI: 10.1016/j.brainres.2015.03.013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2014] [Revised: 03/05/2015] [Accepted: 03/08/2015] [Indexed: 11/30/2022]
Abstract
This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties for iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time-course is modulated by particular aspects of the modality in which a lexical item will be finally articulated.
Affiliation(s)
- Cristina Baus
- Center of Brain and Cognition, CBC, Universitat Pompeu Fabra, Barcelona, Spain; Laboratoire de Psychologie Cognitive, CNRS and Université d'Aix-Marseille, Marseille, France.
- Albert Costa
- Center of Brain and Cognition, CBC, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain.
19
Gibson TA, Peña ED, Bedore LM. The receptive-expressive gap in bilingual children with and without primary language impairment. American Journal of Speech-Language Pathology 2014; 23:655-67. [PMID: 25029625 PMCID: PMC6380504 DOI: 10.1044/2014_ajslp-12-0119] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/12/2012] [Accepted: 06/18/2014] [Indexed: 06/03/2023]
Abstract
PURPOSE: In this study, the authors examined the magnitude of the discrepancy between standardized measures of receptive and expressive semantic knowledge, known as a receptive-expressive gap, for bilingual children with and without primary language impairment (PLI). METHOD: Spanish and English measures of semantic knowledge were administered to 37 Spanish-English bilingual 7- to 10-year-old children with PLI and to 37 Spanish-English bilingual peers with typical development (TD). Parents and teachers completed questionnaires that yielded day-by-day and hour-by-hour information regarding children's exposure to and use of Spanish and English. RESULTS: Children with PLI had significantly larger discrepancies between receptive and expressive semantics standard scores than their bilingual peers with TD. The receptive-expressive gap for children with PLI was predicted by current English experience, whereas the best predictor for children with TD was cumulative English experience. CONCLUSIONS: As a preliminary explanation, underspecified phonological representations due to bilingual children's divided language input as well as differences in their languages' phonological systems may result in a discrepancy between standardized measures of receptive and expressive semantic knowledge. This discrepancy is greater for bilingual children with PLI because of the additional difficulty these children have in processing phonetic information. Future research is required to understand these underlying processes.
20
Lillo-Martin D, de Quadros RM, Chen Pichler D, Fieldsteel Z. Language choice in bimodal bilingual development. Front Psychol 2014; 5:1163. [PMID: 25368591 PMCID: PMC4202712 DOI: 10.3389/fpsyg.2014.01163] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2014] [Accepted: 09/24/2014] [Indexed: 12/03/2022] Open
Abstract
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending (expressions in both speech and sign simultaneously), an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant community language.
Affiliation(s)
- Diane Lillo-Martin
- Department of Linguistics, University of Connecticut, Storrs, CT, USA
- Haskins Laboratories, New Haven, CT, USA
- Ronice M. de Quadros
- Departamento de Libras, Universidade Federal de Santa Catarina, Florianópolis, Brazil
- Zoe Fieldsteel
- Department of Linguistics, Brown University, Providence, RI, USA
21
Allen JS, Emmorey K, Bruss J, Damasio H. Neuroanatomical differences in visual, motor, and language cortices between congenitally deaf signers, hearing signers, and hearing non-signers. Front Neuroanat 2013; 7:26. [PMID: 23935567 PMCID: PMC3731534 DOI: 10.3389/fnana.2013.00026] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2013] [Accepted: 07/19/2013] [Indexed: 11/13/2022] Open
Abstract
We investigated effects of sign language use and auditory deprivation from birth on the volumes of three cortical regions of the human brain: the visual cortex surrounding the calcarine sulcus in the occipital lobe; the language-related cortex in the inferior frontal gyrus (pars triangularis and pars opercularis); and the motor hand region in the precentral gyrus. The study included 25 congenitally deaf participants and 41 hearing participants (of which 16 were native sign language users); all were right-handed. Deaf participants exhibited a larger calcarine volume than hearing participants, which we interpret as the likely result of cross-modal compensation and/or dynamic interactions within sensory neural networks. Deaf participants also had increased volumes of the pars triangularis bilaterally compared to hearing signers and non-signers, which we interpret as related to the increased linguistic demands of speech processing and/or text reading for deaf individuals. Finally, although no statistically significant differences were found in the motor hand region for any of the groups, the deaf group was leftward asymmetric, the hearing signers essentially symmetric, and the hearing non-signers rightward asymmetric, results we interpret as the possible outcome of activity-dependent change due to life-long signing. The brain differences we observed in visual, motor, and language-related areas in adult deaf native signers provide evidence for the plasticity available for cognitive adaptation to varied environments during development.
Affiliation(s)
- John S Allen
- Dornsife Cognitive Neuroscience Imaging Center, University of Southern California, Los Angeles, CA, USA; Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA