1. Sehyr ZS, Emmorey K. Contribution of Lexical Quality and Sign Language Variables to Reading Comprehension. J Deaf Stud Deaf Educ 2022; 27:355-372. [PMID: 35775152] [DOI: 10.1093/deafed/enac018]
Abstract
The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.
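Study 1's key analytic move, asking whether each lexical-quality variable predicts reading comprehension differently for deaf and hearing readers, corresponds to a regression with group-by-predictor interaction terms. The sketch below is a minimal illustration of that logic under assumed variable names and simulated data; it is not the authors' analysis code.

```python
# Minimal sketch: do lexical-quality predictors of reading comprehension
# differ by group? Variable names and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 195  # 97 deaf + 98 hearing participants, as in Study 1
df = pd.DataFrame({
    "group": ["deaf"] * 97 + ["hearing"] * 98,
    "phonology": rng.normal(size=n),    # phonological awareness score
    "orthography": rng.normal(size=n),  # orthographic precision (e.g., spelling)
    "semantics": rng.normal(size=n),    # vocabulary / semantic knowledge
})
df["reading"] = rng.normal(size=n)      # reading comprehension score

# Significant group x predictor interactions would indicate that a
# predictor carries different weight for deaf vs. hearing readers.
fit = smf.ols("reading ~ group * (phonology + orthography + semantics)", df).fit()
print(fit.summary())
```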
Affiliation(s)
- Zed Sevcikova Sehyr
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, CA, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, CA, USA
2. Cross-modal and cross-language activation in bilinguals reveals lexical competition even when words or signs are unheard or unseen. Proc Natl Acad Sci U S A 2022; 119:e2203906119. [PMID: 36037359] [PMCID: PMC9457174] [DOI: 10.1073/pnas.2203906119]
Abstract
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish-Spanish Sign Language) and unimodal bilinguals (Spanish/Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, how hearing a word could coactivate signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes across modalities. Spoken word activation follows the temporal structure of the word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that this pattern of activation is instead driven by how common the signs' sublexical units are in the lexicon. These results reveal the interaction between the perceptual properties of the signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
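In visual world studies like this one, coactivation is read off the proportion of fixations to a phonologically related competitor versus an unrelated object over time. A minimal sketch of that computation follows; the column names and the 50 ms bin width are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: fixation proportions per time bin in a visual world task.
import pandas as pd

def fixation_proportions(samples: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """samples: one row per eye-tracking sample with columns
    'time_ms' (time from word/sign onset) and 'roi' (object fixated,
    e.g., 'target', 'competitor', 'unrelated')."""
    samples = samples.copy()
    samples["bin"] = (samples["time_ms"] // bin_ms) * bin_ms
    # Share of samples per bin on each object; coactivation appears as
    # competitor > unrelated within some time window after onset.
    return (samples.groupby("bin")["roi"]
                   .value_counts(normalize=True)
                   .unstack(fill_value=0.0))
```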
3. Hänel-Faulhaber B, Groen MA, Röder B, Friedrich CK. Ongoing Sign Processing Facilitates Written Word Recognition in Deaf Native Signing Children. Front Psychol 2022; 13:917700. [PMID: 35992405] [PMCID: PMC9390089] [DOI: 10.3389/fpsyg.2022.917700]
Abstract
Signed and written languages are intimately related in proficient signing readers. Here, we tested whether deaf native signing beginning readers are able to make rapid use of ongoing sign language to facilitate recognition of written words. Deaf native signing children (mean age 10 years, 7 months) received prime-target pairs with sign onsets as primes and written words as targets. In a control group of hearing children (matched to the deaf children in their reading abilities, mean age 8 years, 8 months), spoken word onsets were used as primes instead. Targets (written German words) were completions of either the German signs or the spoken word onsets. The participants' task was to decide whether the target word was a possible German word. Sign onsets facilitated processing of written targets in deaf children, similarly to spoken word onsets facilitating processing of written targets in hearing children. In both groups, priming elicited similar effects in the simultaneously recorded event-related potentials (ERPs), starting as early as 200 ms after the onset of the written target. These results suggest that beginning readers can use ongoing lexical processing in their native language - be it signed or spoken - to facilitate written word recognition. We conclude that intimate interactions between sign and written language might in turn facilitate reading acquisition in deaf beginning readers.
Affiliation(s)
- Brigitte Röder
- Biological Psychology and Neuropsychology, Universität Hamburg, Hamburg, Germany
- Claudia K. Friedrich
- Department of Developmental Psychology, University of Tübingen, Tübingen, Germany
4. Frederiksen AT. Emerging ASL Distinctions in Sign-Speech Bilinguals' Signs and Co-speech Gestures in Placement Descriptions. Front Psychol 2021; 12:686485. [PMID: 34413812] [PMCID: PMC8369348] [DOI: 10.3389/fpsyg.2021.686485]
Abstract
Previous work on placement expressions (e.g., "she put the cup on the table") has demonstrated cross-linguistic differences in the specificity of placement expressions in the native language (L1), with some languages preferring more general, widely applicable expressions and others preferring more specific expressions based on more fine-grained distinctions. Research on second language (L2) acquisition of an additional spoken language has shown that learning the appropriate L2 placement distinctions poses a challenge for adult learners, whose L2 semantic representations can be non-target-like and have fuzzy boundaries. It is unknown whether similar effects apply to learners acquiring an L2 in a different sensory-motor modality, e.g., hearing learners of a sign language. Placement verbs in signed languages tend to be highly iconic and to exhibit transparent semantic boundaries, which may facilitate acquisition of signed placement verbs. In addition, little is known about how exposure to different semantic boundaries in placement events in a typologically different language affects lexical semantic meaning in the L1. In this study, we examined placement event descriptions (in American Sign Language (ASL) and English) in hearing L2 learners of ASL who were native speakers of English. L2 signers' ASL placement descriptions looked similar to those of two Deaf, native ASL signer controls, suggesting that the iconicity and transparency of placement distinctions in the visual modality may facilitate L2 acquisition. Nevertheless, L2 signers used a wider range of handshapes in ASL and used them less appropriately, indicating that fuzzy semantic boundaries occur in cross-modal L2 acquisition as well. In addition, while the L2 signers' English verbal expressions were not different from those of a non-signing control group, placement distinctions expressed in co-speech gesture were marginally more ASL-like for L2 signers, suggesting that exposure to different semantic boundaries can cause changes to how placement is conceptualized in the L1 as well.
Affiliation(s)
- Anne Therese Frederiksen
- Department of Linguistics, University of California, San Diego, La Jolla, CA, United States
- Department of Language Science, University of California, Irvine, Irvine, CA, United States
5. Bosworth RG, Binder EM, Tyler SC, Morford JP. Automaticity of lexical access in deaf and hearing bilinguals: Cross-linguistic evidence from the color Stroop task across five languages. Cognition 2021; 212:104659. [PMID: 33798950] [DOI: 10.1016/j.cognition.2021.104659]
Abstract
The well-known Stroop interference effect has been instrumental in revealing the highly automated nature of lexical processing as well as providing new insights to the underlying lexical organization of first and second languages within proficient bilinguals. The present cross-linguistic study had two goals: 1) to examine Stroop interference for dynamic signs and printed words in deaf ASL-English bilinguals who report no reliance on speech or audiological aids; 2) to compare Stroop interference effects in several groups of bilinguals whose two languages range from very distinct to very similar in their shared orthographic patterns: ASL-English bilinguals (very distinct), Chinese-English bilinguals (low similarity), Korean-English bilinguals (moderate similarity), and Spanish-English bilinguals (high similarity). Reaction time and accuracy were measured for the Stroop color naming and word reading tasks, for congruent and incongruent color font conditions. Results confirmed strong Stroop interference for both dynamic ASL stimuli and English printed words in deaf bilinguals, with stronger Stroop interference effects in ASL for deaf bilinguals who scored higher in a direct assessment of ASL proficiency. Comparison of the four groups of bilinguals revealed that the same-script bilinguals (Spanish-English bilinguals) exhibited significantly greater Stroop interference effects for color naming than the other three bilingual groups. The results support three conclusions. First, Stroop interference effects are found for both signed and spoken languages. Second, contrary to some claims in the literature about deaf signers who do not use speech being poor readers, deaf bilinguals' lexical processing of both signs and written words is highly automated. Third, cross-language similarity is a critical factor shaping bilinguals' experience of Stroop interference in their two languages. This study represents the first comparison of both deaf and hearing bilinguals on the Stroop task, offering a critical test of theories about bilingual lexical access and cognitive control.
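As background for the result above: the Stroop interference effect is simply the reaction-time cost of incongruent relative to congruent trials, computed per participant and per language. A minimal sketch, with column names as illustrative assumptions rather than the authors' code:

```python
# Minimal sketch: Stroop interference as incongruent minus congruent RT.
import pandas as pd

def stroop_interference(trials: pd.DataFrame) -> pd.DataFrame:
    """trials: one row per correct trial with columns 'subject',
    'language' (e.g., 'ASL', 'English'), 'condition' ('congruent' or
    'incongruent'), and 'rt_ms'."""
    mean_rt = (trials.groupby(["subject", "language", "condition"])["rt_ms"]
                     .mean()
                     .unstack("condition"))
    # Positive values = slower color naming under conflict, the signature
    # of automatic lexical access to the word or sign.
    mean_rt["interference_ms"] = mean_rt["incongruent"] - mean_rt["congruent"]
    return mean_rt.reset_index()
```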
Affiliation(s)
- Rain G Bosworth
- National Technical Institute for the Deaf, Rochester Institute of Technology, USA
- Sarah C Tyler
- Department of Psychology, University of California, San Diego, USA
- Jill P Morford
- Department of Linguistics, University of New Mexico, USA
6. Language development in deaf bilinguals: Deaf middle school students co-activate written English and American Sign Language during lexical processing. Cognition 2021; 211:104642. [PMID: 33752155] [DOI: 10.1016/j.cognition.2021.104642]
Abstract
Bilinguals, both hearing and deaf, activate multiple languages simultaneously even in contexts that require only one language. To date, the point in development at which bilingual signers experience cross-language activation of a signed and a spoken language remains unknown. We investigated the processing of written words by ASL-English bilingual deaf middle school students. Deaf bilinguals were faster to respond to English word pairs with phonologically related translations in ASL than to English word pairs with unrelated translations, but no difference was found for hearing controls with no knowledge of ASL. The results indicate that co-activation of signs and written words is not the outcome of years of bilingual experience, but instead characterizes bilingual language development.
7. Tatariw C, Mortazavi B, Ledford TC, Starr SF, Smyth E, Griffin Wood A, Simpson LT, Cherry JA. Nitrate reduction capacity is limited by belowground plant recovery in a 32-year-old created salt marsh. Restor Ecol 2020. [DOI: 10.1111/rec.13300]
Affiliation(s)
- Corianne Tatariw
- Department of Biological Sciences, The University of Alabama, 1325 Science and Engineering Complex (SEC), 300 Hackberry Lane, Tuscaloosa, AL 35487, U.S.A.
- Behzad Mortazavi
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Alabama Water Institute, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Center for Complex Hydrosystems Research, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Taylor C. Ledford
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Sommer F. Starr
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Erin Smyth
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Abigail Griffin Wood
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Loraé T. Simpson
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- Julia A. Cherry
- Department of Biological Sciences, The University of Alabama, Tuscaloosa, AL 35487, U.S.A.
- New College, The University of Alabama, 201 Lloyd Hall, 503 6th Avenue, Tuscaloosa, AL 35487, U.S.A.
8. Schotter ER, Johnson E, Lieberman AM. The sign superiority effect: Lexical status facilitates peripheral handshape identification for deaf signers. J Exp Psychol Hum Percept Perform 2020; 46:1397-1410. [PMID: 32940493] [PMCID: PMC7887614] [DOI: 10.1037/xhp0000862]
Abstract
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Affiliation(s)
- Emily Johnson
- Department of Psychology, University of South Florida
- Amy M Lieberman
- Wheelock College of Education and Human Development, Boston University
9. Lexical processing in child and adult classroom second language learners: Uniqueness and similarities, and implications for cognitive models. Psychol Learn Motiv 2020. [DOI: 10.1016/bs.plm.2020.03.004]
10. Schaeffner S, Philipp AM. Modality-specific effects in bilingual language perception. J Cogn Psychol 2019. [DOI: 10.1080/20445911.2019.1698584]
Affiliation(s)
- Simone Schaeffner
- Institute of Psychology, RWTH Aachen University, Aachen, Germany
- Department of Psychology, University of Koblenz-Landau, Landau, Germany
11. Lederberg AR, Branum-Martin L, Webb MY, Schick B, Antia S, Easterbrooks SR, Connor CM. Modality and Interrelations Among Language, Reading, Spoken Phonological Awareness, and Fingerspelling. J Deaf Stud Deaf Educ 2019; 24:408-423. [PMID: 31089729] [DOI: 10.1093/deafed/enz011]
Abstract
Better understanding of the mechanisms underlying early reading skills can lead to improved interventions. Hence, the purpose of this study was to examine multivariate associations among reading, language, spoken phonological awareness, and fingerspelling abilities for three groups of deaf and hard-of-hearing (DHH) beginning readers: those who were acquiring only spoken English (n = 101), those who were visual learners and acquiring sign (n = 131), and those who were acquiring both (n = 104). Children were enrolled in kindergarten, first, or second grade. Within-group and between-group confirmatory factor analysis showed that there were both similarities and differences in the abilities that underlie reading in these three groups. For all groups, reading abilities related to both language and the ability to manipulate the sublexical features of words. However, the groups differed on whether these constructs were based on visual or spoken language. Our results suggest that there are alternative means to learning to read. Whereas all DHH children learning to read rely on the same fundamental abilities of language and phonological processing, the modality, levels, and relations among these abilities differ.
12. Morford JP, Occhino C, Zirnstein M, Kroll JF, Wilkinson E, Piñar P. What is the Source of Bilingual Cross-Language Activation in Deaf Bilinguals? J Deaf Stud Deaf Educ 2019; 24:356-365. [PMID: 31398721] [DOI: 10.1093/deafed/enz024]
Abstract
When deaf bilinguals are asked to make semantic similarity judgments of two written words, their responses are influenced by the sublexical relationship of the signed language translations of the target words. This study investigated whether the observed effects of American Sign Language (ASL) activation on English print depend on (a) an overlap in the syllabic structure of the signed translations or (b) initialization, an effect of contact between ASL and English that has resulted in a direct representation of English orthographic features in ASL sublexical form. Results demonstrate that neither of these conditions is required for, or enhances, the effects of cross-language activation. The experimental outcomes indicate that deaf bilinguals discover the optimal mapping between their two languages in a manner that is not constrained by privileged sublexical associations.
13. Ma F, Ai H. Chinese Learners of English See Chinese Words When Reading English Words. J Psycholinguist Res 2018; 47:505-521. [PMID: 29159763] [DOI: 10.1007/s10936-017-9533-8]
Abstract
The present study examines whether, when second language (L2) learners read words in the L2, the orthography and/or phonology of the translation words in the first language (L1) is activated, and whether the patterns are modulated by proficiency in the L2. In two experiments, two groups of Chinese learners of English immersed in the L1 environment, one less proficient and the other more proficient in English, performed a translation recognition task. In this task, participants judged whether pairs of words, with an L2 word preceding an L1 word, were translation words or not. The critical conditions compared the performance of learners to reject distractors that were related to the translation word (e.g., 杯, cup in Chinese, pronounced as /bei1/) of an L2 word (e.g., cup) in orthography (e.g., 坏, bad in Chinese, pronounced as /huai4/) or phonology (e.g., 悲, sad in Chinese, pronounced as /bei1/). Results of Experiment 1 showed that less proficient learners were slower and less accurate in rejecting translation orthography distractors, as compared to unrelated controls, demonstrating a robust translation orthography interference effect. In contrast, their performance was not significantly different when rejecting translation phonology distractors, relative to unrelated controls, showing no translation phonology interference. The same patterns were observed in more proficient learners in Experiment 2. Together, these results suggest that when Chinese learners of English read English words, the orthographic information, but not the phonological information, of the Chinese translation words is activated. In addition, this activation is not modulated by L2 proficiency.
Affiliation(s)
- Fengyang Ma
- School of Education, University of Cincinnati, 610N Teachers College, 2610 McMicken Circle, Cincinnati, OH 45221, USA
- Haiyang Ai
- School of Education, University of Cincinnati, 615P Teachers College, 2610 McMicken Circle, Cincinnati, OH 45221, USA
14. Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018; 33:1092-1106. [PMID: 30662923] [PMCID: PMC6335044] [DOI: 10.1080/23273798.2018.1446543]
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
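Priming effects like these are conventionally quantified as the difference in mean ERP amplitude between related and unrelated targets within each time window. The sketch below illustrates that computation; the window boundaries and array layout are assumptions for illustration, since the paper defines its own windows and electrode sites.

```python
# Minimal sketch: mean amplitude in an ERP time window, per condition.
import numpy as np

def window_mean(epochs: np.ndarray, times_ms: np.ndarray,
                lo: float, hi: float) -> float:
    """epochs: (n_trials, n_samples) baseline-corrected voltages at one
    electrode; times_ms: (n_samples,) sample times relative to target onset."""
    mask = (times_ms >= lo) & (times_ms < hi)
    return float(epochs[:, mask].mean())  # grand mean over trials and samples

# Priming effect = related minus unrelated mean amplitude per window, e.g.:
# n400_effect = window_mean(related, t, 350, 550) - window_mean(unrelated, t, 350, 550)
```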
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
15. Is susceptibility to cross-language interference domain specific? Cognition 2017; 165:10-25. [DOI: 10.1016/j.cognition.2017.04.006]
16. Meade G, Midgley KJ, Sevcikova Sehyr Z, Holcomb PJ, Emmorey K. Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain Lang 2017; 170:50-61. [PMID: 28407510] [PMCID: PMC5538318] [DOI: 10.1016/j.bandl.2017.03.004]
Abstract
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debriefing), and they exhibited an effect of ASL phonology in a later time window (700-900 ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals.
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, USA
- Zed Sevcikova Sehyr
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
17. Morford JP, Occhino-Kehoe C, Piñar P, Wilkinson E, Kroll JF. The time course of cross-language activation in deaf ASL-English bilinguals. Bilingualism (Cambridge, England) 2017; 20:337-350. [PMID: 31320833] [PMCID: PMC6639091] [DOI: 10.1017/s136672891500067x]
Abstract
What is the time course of cross-language activation in deaf sign-print bilinguals? Prior studies demonstrating cross-language activation in deaf bilinguals used paradigms that would allow strategic or conscious translation. This study investigates whether cross-language activation can be eliminated by reducing the time available for lexical processing. Deaf ASL-English bilinguals and hearing English monolinguals viewed pairs of English words and judged their semantic similarity. Half of the stimuli had phonologically related translations in ASL, but participants saw only English words. We replicated prior findings of cross-language activation despite the introduction of a much faster rate of presentation. Further, the deaf bilinguals were as fast or faster than hearing monolinguals despite the fact that the task was in their second language. The results allow us to rule out the possibility that deaf ASL-English bilinguals only activate ASL phonological forms when given ample time for strategic or conscious translation across their two languages.
Affiliation(s)
- Jill P. Morford
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
- Department of Linguistics, University of New Mexico, USA
- Corrine Occhino-Kehoe
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
- Department of Linguistics, University of New Mexico, USA
- Pilar Piñar
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
- Department of Foreign Languages, Gallaudet University, USA
- Erin Wilkinson
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
- Department of Linguistics, University of Manitoba, Canada
- Judith F. Kroll
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
- Department of Psychology, Pennsylvania State University, USA
18. Gutiérrez-Sigut E, Costello B, Baus C, Carreiras M. LSE-Sign: A lexical database for Spanish Sign Language. Behav Res Methods 2016; 48:123-137.
Abstract
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.
19. Petitto LA, Langdon C, Stone A, Andriola D, Kartheiser G, Cochran C. Visual sign phonology: insights into human reading and language from a natural soundless phonology. Wiley Interdiscip Rev Cogn Sci 2016; 7:366-381. [PMID: 27425650] [DOI: 10.1002/wcs.1404]
Abstract
Among the most prevailing assumptions in science and society about the human reading process is that sound and sound-based phonology are critical to young readers. The child's sound-to-letter decoding is viewed as universal and vital to deriving meaning from print. We offer a different view. The crucial link for early reading success is not between segmental sounds and print. Instead, the human brain's capacity to segment, categorize, and discern linguistic patterning makes possible the capacity to segment all languages. This biological process includes the segmentation of languages on the hands in signed languages. Exposure to natural sign language in early life equally affords the child's discovery of silent segmental units in visual sign phonology (VSP) that can also facilitate segmental decoding of print. We consider powerful biological evidence about the brain, how it builds sound and sign phonology, and why sound and sign phonology are equally important in language learning and reading. We offer a testable theoretical account, reading model, and predictions about how VSP can facilitate segmentation and mapping between print and meaning. We explain how VSP can be a powerful facilitator of all children's reading success (deaf and hearing) - an account with profound transformative impact on learning to read in deaf children with different language backgrounds. The existence of VSP has important implications for understanding core properties of all human language and reading, challenges assumptions about language and reading as being tied to sound, and provides novel insight into a remarkable biological equivalence in signed and spoken languages.
Affiliation(s)
- L A Petitto
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- Department of Psychology, Gallaudet University, Washington, DC, USA
- C Langdon
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- A Stone
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- D Andriola
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- G Kartheiser
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- C Cochran
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA
- Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA
- Department of Linguistics, Gallaudet University, Washington, DC, USA
20. Giezen MR, Emmorey K. Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference. Bilingualism (Cambridge, England) 2016; 19:264-276. [PMID: 26989347] [PMCID: PMC4790112] [DOI: 10.1017/s1366728915000097]
Abstract
We used picture-word interference (PWI) to discover a) whether cross-language activation at the lexical level can yield phonological priming effects when languages do not share phonological representations, and b) whether semantic interference effects occur without articulatory competition. Bimodal bilinguals fluent in American Sign Language (ASL) and English named pictures in ASL while listening to distractor words that were 1) translation equivalents, 2) phonologically related to the target sign through translation, 3) semantically related, or 4) unrelated. Monolingual speakers named pictures in English. Production of ASL signs was facilitated by words that were phonologically related through translation and by translation equivalents, indicating that cross-language activation spreads from lexical to phonological levels for production. Semantic interference effects were not observed for bimodal bilinguals, providing some support for a post-lexical locus of semantic interference, but which we suggest may instead reflect time course differences in spoken and signed production in the PWI task.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
21. Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism (Cambridge, England) 2016; 19:223-242. [PMID: 28804269] [PMCID: PMC5553278] [DOI: 10.1017/s1366728915000085]
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H Gollan
- Department of Psychiatry, University of California San Diego
22. Holmer E, Heimann M, Rudner M. Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children. Res Dev Disabil 2016; 48:145-159. [PMID: 26561215] [DOI: 10.1016/j.ridd.2015.10.008]
Abstract
BACKGROUND AND AIMS: Children with good phonological awareness (PA) are often good word readers. Here, we asked whether Swedish deaf and hard-of-hearing (DHH) children who are more aware of the phonology of Swedish Sign Language, a language with no orthography, are better at reading words in Swedish.
METHODS AND PROCEDURES: We developed the Cross-modal Phonological Awareness Test (C-PhAT), which can be used to assess PA in both Swedish Sign Language (C-PhAT-SSL) and Swedish (C-PhAT-Swed), and investigated how C-PhAT performance was related to word reading as well as linguistic and cognitive skills. We validated C-PhAT-Swed and administered C-PhAT-Swed and C-PhAT-SSL to DHH children who attended Swedish deaf schools with a bilingual curriculum and were at an early stage of reading.
OUTCOMES AND RESULTS: C-PhAT-SSL correlated significantly with word reading for DHH children. They performed poorly on C-PhAT-Swed, and their scores did not correlate significantly either with C-PhAT-SSL or with word reading, although they did correlate significantly with cognitive measures.
CONCLUSIONS AND IMPLICATIONS: These results provide preliminary evidence that DHH children with good sign language PA are better at reading words, and show that measures of spoken language PA in DHH children may be confounded by individual differences in cognitive skills.
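The headline association, C-PhAT-SSL scores correlating with word reading, amounts to a correlation between two test scores. A minimal sketch with hypothetical numbers (Spearman is a reasonable default for small samples of test scores):

```python
# Minimal sketch: correlating sign language PA with word reading.
import numpy as np
from scipy import stats

cphat_ssl = np.array([12, 15, 9, 20, 14, 11, 18, 16])   # hypothetical PA scores
word_read = np.array([23, 30, 18, 41, 28, 20, 35, 31])  # hypothetical reading scores

rho, p = stats.spearmanr(cphat_ssl, word_read)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```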
Affiliation(s)
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mikael Heimann
- Swedish Institute for Disability Research and Division of Psychology, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
23. Navarrete E, Caccaro A, Pavani F, Mahon BZ, Peressotti F. With or without semantic mediation: retrieval of lexical representations in sign production. J Deaf Stud Deaf Educ 2015; 20:163-171. [PMID: 25583708] [PMCID: PMC4810805] [DOI: 10.1093/deafed/enu045]
Abstract
How are lexical representations retrieved during sign production? Similar to spoken languages, lexical representation in sign language must be accessed through semantics when naming pictures. However, it remains an open issue whether lexical representations in sign language can be accessed via routes that bypass semantics when retrieval is elicited by written words. Here we address this issue by exploring under which circumstances sign retrieval is sensitive to semantic context. To this end we replicate in sign language production the cumulative semantic cost: The observation that naming latencies increase monotonically with each additional within-category item that is named in a sequence of pictures. In the experiment reported here, deaf participants signed sequences of pictures or signed sequences of Italian written words using Italian Sign Language. The results showed a cumulative semantic cost in picture naming but, strikingly, not in word naming. This suggests that only picture naming required access to semantics, whereas deaf signers accessed the sign language lexicon directly (i.e., bypassing semantics) when naming written words. The implications of these findings for the architecture of the sign production system are discussed in the context of current models of lexical access in spoken language production.
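The cumulative semantic cost described above is usually quantified as the slope of naming latency over within-category ordinal position (the first, second, third item of a semantic category named in the sequence). A minimal sketch under assumed column names:

```python
# Minimal sketch: cumulative semantic cost as ms of slowdown per
# within-category position. Column names are illustrative assumptions.
import numpy as np
import pandas as pd

def cumulative_cost_slope(trials: pd.DataFrame) -> float:
    """trials: columns 'ordinal' (1-based within-category position)
    and 'rt_ms' (naming latency)."""
    slope, _intercept = np.polyfit(trials["ordinal"], trials["rt_ms"], deg=1)
    return float(slope)

# The dissociation reported above corresponds to a reliably positive
# slope for picture naming but a slope near zero for word naming.
```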
Affiliation(s)
- Bradford Z Mahon
- University of Rochester and University of Rochester Medical Center