1. Quartarone C, Navarrete E, Budisavljević S, Peressotti F. Exploring the ventral white matter language network in bimodal and unimodal bilinguals. Brain and Language 2022; 235:105187. [PMID: 36244164] [DOI: 10.1016/j.bandl.2022.105187]
Abstract
We used diffusion magnetic resonance imaging tractography to investigate the effect of language modality on the anatomy of the ventral white matter language network by comparing unimodal (Italian/English) and bimodal (Italian/Italian Sign Language) bilinguals. We extracted diffusion tractography measures of the inferior longitudinal fasciculus (ILF), uncinate fasciculus (UF), and inferior fronto-occipital fasciculus (IFOF) and correlated them with the degree of bilingualism and with individual performance on fluency tasks. For both groups of bilinguals, the microstructural properties of the right ILF correlated with the individual level of L2 proficiency, confirming the involvement of this tract in bilingualism. In addition, we found that the degree of left lateralization of the ILF predicted performance on semantic fluency in L1. The microstructural properties of the right UF correlated with performance on phonological fluency in L1 for bimodal bilinguals only. Overall, the pattern shows both similarities and differences between the two groups of bilinguals.
Affiliation(s)
- Cinzia Quartarone
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione - University of Padua, Via Venezia, 8, 35137 Padova, Italy
- Eduardo Navarrete
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione - University of Padua, Via Venezia, 8, 35137 Padova, Italy
- Sanja Budisavljević
- School of Medicine, St. Andrews University, College Gate, St Andrews KY16 9AJ, UK
- Francesca Peressotti
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione - University of Padua, Via Venezia, 8, 35137 Padova, Italy.
2. Xu L, Gong T, Shuai L, Feng J. Significantly different noun-verb distinguishing mechanisms in written Chinese and Chinese sign language: An event-related potential study of bilingual native signers. Front Neurosci 2022; 16:910263. [DOI: 10.3389/fnins.2022.910263]
Abstract
Little is known about (a) whether bilingual signers possess dissociated neural mechanisms for noun and verb processing in written language (as native non-signers do), or whether they use similar neural mechanisms for both (given the general lack of part-of-speech criteria in sign languages); and (b) whether learning a language in another modality (L2) influences the corresponding neural mechanisms of L1. To address these issues, we conducted an electroencephalogram (EEG)-based reading comprehension study with bimodal bilinguals, namely Chinese native deaf signers whose L1 is Chinese Sign Language and whose L2 is written Chinese. Analyses identified significantly dissociated neural mechanisms in the bilingual signers' written noun and verb processing (which also became more explicit as their written Chinese proficiency increased), but not in their understanding of verbal and nominal meanings in Chinese Sign Language. These findings reveal a link between modality-based linguistic features and processing mechanisms, suggesting that processing the modality-based features of a language is unlikely to be affected by learning another language in a different modality, and that cross-modal language transfer is subject to modal constraints rather than to explicit linguistic features.
3. Schönström K, Holmström I. L2M1 and L2M2 Acquisition of Sign Lexicon: The Impact of Multimodality on the Sign Second Language Acquisition. Front Psychol 2022; 13:896254. [PMID: 35756281] [PMCID: PMC9231460] [DOI: 10.3389/fpsyg.2022.896254]
Abstract
In second language research, the concept of cross-linguistic influence, or transfer, has frequently been used to describe the interaction between the first language (L1) and the second language (L2) in the L2 acquisition process. However, less is known about the L2 acquisition of a sign language in general, and specifically about the differences in the acquisition process between L2M2 learners (learners learning a sign language for the first time) and L2M1 learners (signers learning another sign language) from a multimodal perspective. Our study explores the influence of modality knowledge on learning Swedish Sign Language through a descriptive analysis of the sign lexicon in narratives produced by L2M1 and L2M2 learners, respectively. A descriptive mixed-methods framework was used to analyze narratives of adult L2M1 (n = 9) and L2M2 learners (n = 15), with a focus on the sign lexicon, i.e., the use and distribution of sign types such as lexical signs, depicting signs (classifier predicates), fingerspelling, pointing, and gestures. The number and distribution of the signs are then compared between the groups. In addition, a comparison with a control group of L1 signers (n = 9) is provided. The results suggest that L2M2 learners exhibit cross-modal cross-linguistic transfer from Swedish (through higher use of lexical signs and fingerspelling), whereas L2M1 learners exhibit same-modal cross-linguistic transfer from their L1 sign languages (through higher use of depicting signs and of signs from the L1 sign language and international signs). The study suggests that it is harder for L2M2 learners to acquire the modality-specific lexicon, despite possible underlying gestural knowledge. Furthermore, it suggests that L2M1 learners' access to modality-specific knowledge, overlapping with access to gestural knowledge and iconicity, facilitates faster L2 lexical acquisition, which is discussed from the perspective of linguistic relativity (including modality) and its role in sign L2 acquisition.
Affiliation(s)
- Ingela Holmström
- Department of Linguistics, Stockholm University, Stockholm, Sweden
4. Blanco-Elorrieta E, Caramazza A. A common selection mechanism at each linguistic level in bilingual and monolingual language production. Cognition 2021; 213:104625. [DOI: 10.1016/j.cognition.2021.104625]
5. Manhardt F, Brouwer S, Özyürek A. A Tale of Two Modalities: Sign and Speech Influence Each Other in Bimodal Bilinguals. Psychol Sci 2021; 32:424-436. [PMID: 33621474] [DOI: 10.1177/0956797620968789]
Abstract
Bimodal bilinguals are hearing individuals fluent in a sign language and a spoken language. Can the two languages influence each other in such individuals despite the differences between the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals' expression of spatial relations. Unlike spoken languages, sign languages use iconic linguistic forms that resemble physical features of the objects in a spatial relation and thus express specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands, together with their hearing nonsigning and deaf signing peers (n = 20 each), described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners did, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals' speech and signs are thus shaped by two languages from different modalities.
Affiliation(s)
- Aslı Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Center for Cognition, Radboud University
6. Weisberg J, Casey S, Sehyr ZS, Emmorey K. Second language acquisition of American Sign Language influences co-speech gesture production. Bilingualism (Cambridge, England) 2020; 23:473-482. [PMID: 32733161] [PMCID: PMC7392225]
Abstract
Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL-English bilinguals, fluent late second language (L2) signers (≥ 10 years of experience signing), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures at higher rates compared to non-signers, particularly iconic gestures, and used a greater variety of handshapes. These results indicate susceptibility of the co-speech gesture system to modification by extensive sign language experience, regardless of the age of acquisition. L2 signers produced more ASL signs and more handshape varieties than early signers, suggesting less separation between the ASL lexicon and the co-speech gesture system for L2 signers.
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University 6495 Alvarado Rd., Suite 200, San Diego, CA 92120 USA
- Shannon Casey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University 6495 Alvarado Rd., Suite 200, San Diego, CA 92120 USA
- Zed Sevcikova Sehyr
- Laboratory for Language and Cognitive Neuroscience, San Diego State University 6495 Alvarado Rd., Suite 200, San Diego, CA 92120 USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University 6495 Alvarado Rd., Suite 200, San Diego, CA 92120 USA
7. Emmorey K, Li C, Petrich J, Gollan TH. Turning languages on and off: Switching into and out of code-blends reveals the nature of bilingual language control. J Exp Psychol Learn Mem Cogn 2020; 46:443-454. [PMID: 31246060] [PMCID: PMC6933100] [DOI: 10.1037/xlm0000734]
Abstract
When spoken-language (unimodal) bilinguals switch between languages, they must simultaneously inhibit one language and activate the other. Because American Sign Language (ASL)-English (bimodal) bilinguals can switch into and out of code-blends (simultaneous production of a sign and a word), we can tease apart the cost of inhibition (turning a language off) from the cost of activation (turning a language on). Results from a cued picture-naming task with 43 bimodal bilinguals revealed a significant cost for turning off a language (switching out of a code-blend), but no cost for turning on a language (switching into a code-blend). Switching from single to dual lexical retrieval (adding a language) was also not costly. These patterns held for both languages regardless of default language, that is, whether switching between speaking and code-blending (English default) or between signing and code-blending (ASL default). Overall, the results support models of bilingual language control that assume a primary role for inhibitory control and indicate that disengaging from producing a language is more difficult than engaging a new one.
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
- Chuchu Li
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Ave., La Jolla, CA 92093-0948
- Jennifer Petrich
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
- Tamar H. Gollan
- Department of Psychiatry, University of California, San Diego, 9500 Gilman Ave., La Jolla, CA 92093-0948
8. Lu A, Wang L, Guo Y, Zeng J, Zheng D, Wang X, Shao Y, Wang R. The Roles of Relative Linguistic Proficiency and Modality Switching in Language Switch Cost: Evidence from Chinese Visual Unimodal and Bimodal Bilinguals. J Psycholinguist Res 2019; 48:1-18. [PMID: 28865039] [DOI: 10.1007/s10936-017-9519-6]
Abstract
The current study investigated the mechanism of language switching in unbalanced visual unimodal bilinguals as well as in balanced and unbalanced bimodal bilinguals during a picture-naming task. All three groups exhibited significant switch costs across the two languages, with a symmetrical switch cost in balanced bimodal bilinguals and asymmetrical switch costs in unbalanced unimodal and bimodal bilinguals. Moreover, the relative proficiency of the two languages, but not their absolute proficiency, had an effect on the language switch cost. For the bimodal bilinguals, the language switch cost also arose from modality switching. These findings suggest that the language switch cost may originate from multiple sources, both outside (e.g., modality switching) and inside (e.g., the relative proficiency of the two languages) the linguistic lexicon.
Affiliation(s)
- Aitao Lu
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China.
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China.
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China.
- Lu Wang
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Yuyang Guo
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Jiahong Zeng
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Dongping Zheng
- Department of Second Language Studies, University of Hawaii, Honolulu, HI, USA
- Xiaolu Wang
- School of International Studies and Center for the Study of Language and Cognition, Zhejiang University, Zhejiang, China.
- School of Foreign Language Studies, Ningbo Institute of Technology, Zhejiang University, Zhejiang, China.
- School of Humanities and Communication Arts, Western Sydney University, Sydney, NSW, Australia.
- Yulan Shao
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Ruiming Wang
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
9. Understanding gesture in sign and speech: Perspectives from theory of mind, bilingualism, and acting. Behav Brain Sci 2017; 40:e61. [DOI: 10.1017/s0140525x15002964]
Abstract
In their article, Goldin-Meadow & Brentari (G-M&B) assert that researchers must differentiate between sign/speech and gesture. We propose that this distinction may be useful if situated within a two-systems approach to theory of mind (ToM), and we discuss how drawing on perspectives from bilingualism and acting can help us understand the role of gesture in spoken/sign language.
10. Weisberg J, Hubbard AL, Emmorey K. Multimodal integration of spontaneously produced representational co-speech gestures: an fMRI study. Language, Cognition and Neuroscience 2016; 32:158-174. [PMID: 29130054] [PMCID: PMC5675577] [DOI: 10.1080/23273798.2016.1245426]
Abstract
To examine whether more ecologically valid co-speech gesture stimuli elicit brain responses consistent with those found by studies that relied on scripted stimuli, we presented participants with spontaneously produced, meaningful co-speech gesture during fMRI scanning (n = 28). Speech presented with gesture (versus either presented alone) elicited heightened activity in bilateral posterior superior temporal, premotor, and inferior frontal regions. Within left temporal and premotor, but not inferior frontal regions, we identified small clusters with superadditive responses, suggesting that these discrete regions support both sensory and semantic integration. In contrast, surrounding areas and the inferior frontal gyrus may support either sensory or semantic integration. Reduced activation for speech with gesture in language-related regions indicates allocation of fewer neural resources when meaningful gestures accompany speech. Sign language experience did not affect co-speech gesture activation. Overall, our results indicate that scripted stimuli have minimal confounding influences; however, they may miss subtle superadditive effects.
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Amy Lynn Hubbard
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
11. Emmorey K, Giezen MR, Gollan TH. Insights from bimodal bilingualism: Reply to commentaries. Bilingualism (Cambridge, England) 2016; 19:261-263. [PMID: 28781571] [PMCID: PMC5544127] [DOI: 10.1017/s136672891500070x]
Abstract
The commentaries on our Keynote article “Psycholinguistic, cognitive, and neural implications of bimodal bilingualism” were enthusiastic about what can be learned by studying bilinguals who acquire two languages that are understood via distinct perceptual systems (vision vs. audition) and that are produced with distinct linguistic articulators (the hands vs. the vocal tract). The authors also brought out several new ideas, extensions, and issues related to bimodal bilingualism, which we discuss in this reply.
12. Emmorey K, Giezen MR, Gollan TH. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism (Cambridge, England) 2016; 19:223-242. [PMID: 28804269] [PMCID: PMC5553278] [DOI: 10.1017/s1366728915000085]
Abstract
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
- Tamar H. Gollan
- Department of Psychiatry, University of California San Diego
13. Giezen MR, Blumenfeld HK, Shook A, Marian V, Emmorey K. Parallel language activation and inhibitory control in bimodal bilinguals. Cognition 2015; 141:9-25. [PMID: 25912892] [PMCID: PMC4466161] [DOI: 10.1016/j.cognition.2015.04.009]
Abstract
Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level.
Affiliation(s)
- Marcel R Giezen
- San Diego State University, 5250 Campanile Drive, San Diego, CA 92182, USA.
- Henrike K Blumenfeld
- School of Speech, Language and Hearing Sciences, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-1518, USA.
- Anthony Shook
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-1518, USA.
14. Lu A, Yu Y, Niu J, Zhang JX. The effect of sign language structure on complex word reading in Chinese deaf adolescents. PLoS One 2015; 10:e0120943. [PMID: 25799066] [PMCID: PMC4370692] [DOI: 10.1371/journal.pone.0120943]
Abstract
The present study investigated whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular in the delayed reading of complex words by deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, the differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed delayed word reading for derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not for derivational words with one sign (DW-1), with the delay being largest in DW-2, intermediate in CW-2, and smallest in CW-1. This suggests that the structure of sign language has an impact on the delayed processing of written Chinese words in deaf adolescents. These results provide insight into how sign language structure affects written word processing and why it is delayed relative to hearing peers of the same age.
Affiliation(s)
- Aitao Lu
- Center for Studies of Psychological Application & School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Yanping Yu
- Center for Studies of Psychological Application & School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- Jiaxin Niu
- Center for Studies of Psychological Application & School of Psychology, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, Guangzhou, China
- Guangdong Center of Mental Assistance and Contingency Technique for Emergency, Guangzhou, China
- John X. Zhang
- Department of Psychology, Fudan University, Shanghai, China
15. Quinto-Pozos D, Parrill F. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives. Top Cogn Sci 2014; 7:12-35. [PMID: 25348839] [DOI: 10.1111/tops.12120]
Abstract
Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively.
16. Comparing the Products and the Processes of Creating Sign Language Poetry and Pantomimic Improvisations. Journal of Nonverbal Behavior 2013. [DOI: 10.1007/s10919-013-0160-2]
17. Casey S, Emmorey K, Larrabee H. The effects of learning American Sign Language on co-speech gesture. Bilingualism (Cambridge, England) 2012; 15:677-686. [PMID: 23335853] [PMCID: PMC3547625] [DOI: 10.1017/s1366728911000575]
Abstract
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
18. Cormier K, Quinto-Pozos D, Sevcikova Z, Schembri A. Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. Language & Communication 2012; 32:329-348. [PMID: 23805017] [PMCID: PMC3688355] [DOI: 10.1016/j.langcom.2012.09.004]
Abstract
In this paper, we compare so-called "classifier" constructions in signed languages (which we refer to as "depicting constructions") with comparable iconic gestures produced by non-signers. We show clear correspondences between entity constructions and observer viewpoint gestures on the one hand, and handling constructions and character viewpoint gestures on the other. Such correspondences help account for both lexicalisation and de-lexicalisation processes in signed languages and how these processes are influenced by viewpoint. Understanding these processes is crucial when coding and annotating natural sign language data.
Affiliation(s)
- Kearsy Cormier
- Deafness, Cognition & Language Research Centre, University College London, UK
- Zed Sevcikova
- Deafness, Cognition & Language Research Centre, University College London, UK
|
19
|
Shook A, Marian V. Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition 2012; 124:314-24. [PMID: 22770677 DOI: 10.1016/j.cognition.2012.05.014] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2010] [Revised: 04/26/2012] [Accepted: 05/18/2012] [Indexed: 10/28/2022]
Abstract
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals' and English monolinguals' eye movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at the competing item than at phonologically unrelated items, and looked more at competing items than monolinguals did, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension.
Affiliation(s)
- Anthony Shook
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
|
20
|
When deaf signers read English: do written words activate their sign translations? Cognition 2010; 118:286-92. [PMID: 21145047 DOI: 10.1016/j.cognition.2010.11.006] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2010] [Revised: 10/18/2010] [Accepted: 11/05/2010] [Indexed: 11/22/2022]
Abstract
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar, whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.
|
21
|
Abstract
Anthropological studies of sensory impairment address biological conditions and cultural disablement while contributing to theoretical discussions of cultural competence, communicative practices, the role of narrative, and features of identity, ideologies, and technology. As boundary cases, impairments can disclose essential aspects of the senses in human life. Sensory impairment studies navigate the complexities of comparing dominant sensory discourses with individual sense differences, cross-linguistic incomparabilities among sense categories, and how impairment categories tend to fuse together highly diverse conditions. The category of disability, which includes sensory impairment, comprises chronic deficit relative to priority competencies. With special emphasis on blindness/visual impairment and deafness/hearing impairment, we overview sensory impairment on three levels: the social partitioning of the sensorium, differential ramifications of sensory impairments cross-culturally, and the classification of the person based on cultural priorities. We identify ten common themes in ethnographically oriented studies.
Affiliation(s)
- Elizabeth Keating
- Department of Anthropology, University of Texas, Austin, Texas 78712
- R. Neill Hadder
- Department of Anthropology, Texas State University, San Marcos, Texas 78666
|
22
|
Pyers JE, Gollan TH, Emmorey K. Bimodal bilinguals reveal the source of tip-of-the-tongue states. Cognition 2009; 112:323-9. [PMID: 19477437 DOI: 10.1016/j.cognition.2009.04.007] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2008] [Revised: 04/24/2009] [Accepted: 04/27/2009] [Indexed: 11/29/2022]
Abstract
Bilinguals report more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at (a) semantic and/or (b) phonological levels, or (c) that bilinguals use each language less frequently than monolinguals. Bilinguals who speak one language and sign another can help decide between these alternatives because their languages lack phonological overlap. Twenty-two American Sign Language (ASL)-English bilinguals, 22 English monolinguals, and 11 Spanish-English bilinguals named 52 pictures in English. Despite no phonological overlap between languages, ASL-English bilinguals had more TOTs than monolinguals, and TOT rates equivalent to those of Spanish-English bilinguals. These data eliminate phonological blocking as the exclusive source of bilingual disadvantages. A small advantage of ASL-English over Spanish-English bilinguals in correct retrievals is consistent with semantic interference and a minor role for phonological blocking. However, this account faces substantial challenges. We argue that reduced frequency of use is the more comprehensive explanation of TOT rates in all bilinguals.
Affiliation(s)
- Jennie E Pyers
- Wellesley College, Psychology, 106 Central Street, SCI 480, Wellesley, MA 02481, USA.
|
23
|
Emmorey K, Luk G, Pyers JE, Bialystok E. The source of enhanced cognitive control in bilinguals: evidence from bimodal bilinguals. Psychol Sci 2009; 19:1201-6. [PMID: 19121123 DOI: 10.1111/j.1467-9280.2008.02224.x] [Citation(s) in RCA: 122] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual's experience controlling two languages in the same modality.
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA.
|
24
|
Emmorey K, Borinstein HB, Thompson R, Gollan TH. Bimodal bilingualism. BILINGUALISM (CAMBRIDGE, ENGLAND) 2008; 11:43-61. [PMID: 19079743 PMCID: PMC2600850 DOI: 10.1017/s1366728907003203] [Citation(s) in RCA: 91] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Speech-sign or "bimodal" bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal-manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
|