1. Watkins F, Webb S, Stone C, Thompson RL. Language aptitude in the visuospatial modality: L2 British Sign Language acquisition and cognitive skills in British Sign Language-English interpreting students. Front Psychol 2022; 13:932370. PMID: 36186342; PMCID: PMC9516300; DOI: 10.3389/fpsyg.2022.932370.
Abstract
Sign language interpreting (SLI) is a cognitively challenging task performed mostly by second language learners (i.e., not raised using a sign language as a home language). SLI students must first gain language fluency in a new visuospatial modality and then move between spoken and signed modalities as they interpret. As a result, many students plateau before reaching working fluency, and SLI training program drop-out rates are high. However, we know little about the skills required to become a successful interpreter: the few existing studies investigating SLI aptitude in terms of linguistic and cognitive skills lack baseline measures. Here we report a 3-year exploratory longitudinal skills assessment study with British Sign Language (BSL)-English SLI students at two universities (n = 33). Our aims were two-fold: first, to better understand the prerequisite skills that lead to successful SLI outcomes; second, to better understand how signing and interpreting skills affect other aspects of cognition. A battery of tasks was completed at four time points to assess skills including, but not limited to, multimodal and unimodal working memory, 2-dimensional and 3-dimensional mental rotation (MR), and English comprehension. Dependent measures were BSL and SLI course grades, BSL reproduction tests, and consecutive SLI tasks. Results reveal that initial BSL proficiency and 2D-MR were associated with selection for the degree program, while visuospatial working memory was linked to continuing with the program. 3D-MR improved throughout the degree, alongside some limited gains in auditory, visuospatial, and multimodal working memory tasks. Visuospatial working memory and MR were the skills most closely associated with BSL and SLI outcomes, particularly on tasks involving sign language production, thus highlighting the importance of cognition related to the visuospatial modality. These preliminary data will inform SLI training programs, from applicant selection to curriculum design.
Affiliations
- Freya Watkins: Multimodal Multilingual Language Processing Lab, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Stacey Webb: School of Social Sciences, Languages and Intercultural Studies, Heriot-Watt University, Edinburgh, United Kingdom
- Christopher Stone: School of Social, Historical and Political Studies, University of Wolverhampton, Wolverhampton, United Kingdom
- Robin L. Thompson: Multimodal Multilingual Language Processing Lab, School of Psychology, University of Birmingham, Birmingham, United Kingdom
2. Gu S, Chen Pichler D, Kozak LV, Lillo-Martin D. Phonological development in American Sign Language-signing children: Insights from pseudosign repetition tasks. Front Psychol 2022; 13:921047. PMID: 36160535; PMCID: PMC9496651; DOI: 10.3389/fpsyg.2022.921047.
Abstract
In this study, we conducted a pseudosign (nonce sign) repetition task with 22 children (mean age: 6;04) acquiring American Sign Language (ASL) as a first language (L1) from deaf parents. Thirty-nine pseudosigns of varying complexity were developed and organized into eight categories depending on number of hands, number of simultaneous movement types, and number of movement sequences. Pseudosigns also varied in handshape complexity. The children's performance on the ASL pseudosign task improved with age, displaying relatively accurate (re)production of location and orientation, but much less accurate handshape and movement, a finding in line with real sign productions for both L1 and L2 signers. Handshapes with higher complexity were correlated with lower accuracy on the handshape parameter. We found main effects of sequential and simultaneous movement combinations on overall performance. Items with no movement sequence were produced with higher overall accuracy than those with a movement sequence. Items with two simultaneous movement types or a single movement type were produced with higher overall accuracy than those with three simultaneous movement types. Finally, number of hands did not affect overall accuracy. Remarkably, movement sequences impose processing constraints on signing children, whereas complex hands (two hands) and two simultaneous movement types do not significantly lower accuracy, indicating a capacity for processing multiple simultaneous components in signs. Spoken languages, in contrast, manifest greater complexity in temporal length: hearing children's pseudoword repetition still displays high levels of accuracy on disyllabic words, with complexity effects confined to longer multisyllabic words. We conclude that the pseudosign repetition task is an informative tool for studies of signing children's phonological development, one that sheds light on potential modality effects in phonological development.
Affiliations
- Shengyun Gu: Department of Linguistics, University of Connecticut, Storrs, CT, United States
- Deborah Chen Pichler: Department of Linguistics, School of Language, Education, and Culture, Gallaudet University, Washington, DC, United States
- L. Viola Kozak: Department of Linguistics, School of Language, Education, and Culture, Gallaudet University, Washington, DC, United States
- Diane Lillo-Martin: Department of Linguistics, University of Connecticut, Storrs, CT, United States
3. Brozdowski C, Emmorey K. Shadowing in the manual modality. Acta Psychol (Amst) 2020; 208:103092. PMID: 32531500; DOI: 10.1016/j.actpsy.2020.103092.
Abstract
Motor simulation has emerged as a mechanism for both predictive action perception and language comprehension. By deriving a motor command, individuals can predictively represent the outcome of an unfolding action as a forward model. Evidence of simulation can be seen via improved participant performance for stimuli that conform to the participant's individual characteristics (an egocentric bias). There is little evidence, however, from individuals for whom action and language take place in the same modality: sign language users. The present study asked signers and nonsigners to shadow (perform actions in tandem with various models), and the delay between the model and participant ("lag time") served as an indicator of the strength of the predictive model (shorter lag time = more robust model). This design allowed us to examine the role of (a) motor simulation during action prediction, (b) linguistic status in predictive representations (i.e., pseudosigns vs. grooming gestures), and (c) language experience in generating predictions (i.e., signers vs. nonsigners). An egocentric bias was only observed under limited circumstances: when nonsigners began shadowing grooming gestures. The data do not support strong motor simulation proposals and instead highlight the role of (a) production fluency and (b) manual rhythm for signer productions. Signers showed significantly faster lag times for the highly skilled pseudosign model and greater temporal regularity (i.e., lower standard deviations) compared to nonsigners. We conclude that sign language experience may (a) reduce reliance on motor simulation during action observation, (b) attune users to prosodic cues, and (c) induce temporal regularities during action production.
Affiliations
- Chris Brozdowski: San Diego State University, United States of America; University of California, San Diego, United States of America
- Karen Emmorey: San Diego State University, United States of America; University of California, San Diego, United States of America
4. Malaia E, Wilbur RB. Visual and linguistic components of short-term memory: Generalized Neural Model (GNM) for spoken and sign languages. Cortex 2019; 112:69-79. DOI: 10.1016/j.cortex.2018.05.020.
5. Rudner M. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application. Front Psychol 2018; 9:679. PMID: 29867655; PMCID: PMC5962724; DOI: 10.3389/fpsyg.2018.00679.
Abstract
Linguistic manual gestures are the basis of the sign languages used by deaf individuals. Working memory and language processing are intimately connected, and thus when language is gesture-based, it is important to understand the related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.
Affiliations
- Mary Rudner: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
6. The relation between working memory and language comprehension in signers and speakers. Acta Psychol (Amst) 2017; 177:69-77. PMID: 28477456; DOI: 10.1016/j.actpsy.2017.04.014.
Abstract
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often-reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information, and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.
7. Sehyr ZS, Petrich J, Emmorey K. Fingerspelled and Printed Words Are Recoded into a Speech-based Code in Short-term Memory. J Deaf Stud Deaf Educ 2017; 22:72-87. PMID: 27789552; DOI: 10.1093/deafed/enw068.
Abstract
We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14-15 years). Speech-based similarity effects were found for both stimulus types, indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words, indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that this manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary.
8.
Abstract
Signed and spoken languages emerge, change, are acquired, and are processed under distinct perceptual, motor, and memory constraints. Therefore, the Now-or-Never bottleneck has different ramifications for these languages, which are highlighted in this commentary. The extent to which typological differences in linguistic structure can be traced to processing differences provides unique evidence for the claim that structure is processing.
9. Liu HT, Squires B, Liu CJ. Articulatory Suppression Effects on Short-term Memory of Signed Digits and Lexical Items in Hearing Bimodal-Bilingual Adults. J Deaf Stud Deaf Educ 2016; 21:362-372. PMID: 27507848; DOI: 10.1093/deafed/enw048.
Abstract
We can gain a better understanding of short-term memory processes by studying different language codes and modalities. Three experiments were conducted to investigate: (a) Taiwanese Sign Language (TSL) digit spans in Chinese/TSL hearing bilinguals (n = 32); (b) American Sign Language (ASL) digit spans in English/ASL hearing bilinguals (n = 15); and (c) TSL lexical sign spans in Chinese/TSL hearing bilinguals (n = 22). Articulatory suppression conditions were manipulated to determine if participants would use a speech- or sign-based code to rehearse lists of signed items. Results from all 3 experiments showed that oral suppression significantly reduced spans while manual suppression had no effect, revealing that participants were using speech-based rehearsal to retain lists of signed items in short-term memory. In addition, sub-vocal rehearsal using Chinese facilitated higher digit spans than English even though stimuli were perceived and recalled using signs. This difference was not found for lexical sign spans.
10. Miozzo M, Petrova A, Fischer-Baum S, Peressotti F. Serial position encoding of signs. Cognition 2016; 154:69-80. DOI: 10.1016/j.cognition.2016.05.008.
11. Preexisting semantic representation improves working memory performance in the visuospatial domain. Mem Cognit 2016; 44:608-620. DOI: 10.3758/s13421-016-0585-z.
12.
Abstract
Background: A large body of research has investigated substance dependence and working memory (WM), yet no prior study has used a comprehensive test battery to examine the impact of chronic drug dependence on WM as a multi-component system.
Objectives: This study examined the efficiency of several WM components in participants who were chronic drug dependents. In addition, the functioning of the four WM components was compared among dependents of various types of drugs.
Method: In total, 128 chronic drug dependents participated in this study. Their average age was 38.48 years, and they were classified into four drug-dependence groups. The chronic drug dependents were compared with a 36-participant control group with a mean age of 37.6 years. A WM test battery comprising eight tests, assessing each of the four WM components, was administered to each participant.
Results: Compared with the control group, all four groups of drug dependents performed significantly more poorly on all of the WM tasks. Among the four groups of drug users, the polydrug group had the poorest scores on each of the eight tasks, and the scores of the marijuana group were the least affected. Finally, the forward digit span task and the logical memory tasks were less sensitive than the other tasks in differentiating between marijuana users and the normal participants.
Conclusion: The four components of WM are impaired among chronic drug dependents. These results have implications for the development of tools, classification methods, and therapeutic strategies for drug dependents.
13. Wang J, Napier J. Signed language working memory capacity of signed language interpreters and deaf signers. J Deaf Stud Deaf Educ 2013; 18:271-286. PMID: 23303377; DOI: 10.1093/deafed/ens068.
Abstract
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian Sign Language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an Auslan working memory (WM) span task. The results revealed that the hearing signers (i.e., the professional interpreters) significantly outperformed the deaf signers on the Auslan WM span task. However, the results showed no significant differences between the native signers and the nonnative signers in their Auslan working memory capacity, and no significant interaction between hearing status and age of signed language acquisition. Additionally, the study found no significant differences between the deaf native signers (adults) and the deaf nonnative signers (adults) in their Auslan working memory capacity. The findings are discussed in relation to the participants' memory strategies and their early language experience. The findings present challenges for WM theories.
Affiliations
- Jihong Wang: Department of Linguistics, Macquarie University, Sydney NSW 2109, Australia
14. García-Orza J, Carratalá P. Sign recall by hearing signers: Evidences of dual coding. J Cogn Psychol 2012. DOI: 10.1080/20445911.2012.682054.
15. Hirshorn EA, Fernandez NM, Bavelier D. Routes to short-term memory indexing: lessons from deaf native users of American Sign Language. Cogn Neuropsychol 2012; 29:85-103. PMID: 22871205; DOI: 10.1080/02643294.2012.704354.
Abstract
Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to perform a given task. We further argue that the main driving force behind these shifts in relative biasing is language modality (sign vs. speech) and the processing it affords, not deafness per se.
16. Beal-Alvarez JS, Lederberg AR, Easterbrooks SR. Grapheme-phoneme acquisition of deaf preschoolers. J Deaf Stud Deaf Educ 2011; 17:39-60. PMID: 21724967; DOI: 10.1093/deafed/enr030.
Abstract
We examined acquisition of grapheme-phoneme correspondences by 4 deaf and hard-of-hearing preschoolers using instruction from a curriculum designed specifically for this population, supplemented by Visual Phonics. Learning was documented through a multiple-baseline-across-content design as well as descriptive analyses. Preschoolers who used sign language and had average to low-average receptive vocabulary skills and varied speech perception skills acquired all correspondences after instruction. They were also able to use that knowledge while reading words. On a posttest, the children were able to decode graphemes into corresponding phonemes and identified about half of the words that were included during instruction. However, they did not identify any novel words. Descriptive analyses suggest that the children used Visual Phonics as an effective mnemonic device to recall correspondences and that deaf and hard-of-hearing preschoolers, even those with no speech perception abilities, benefited from explicit instruction in the grapheme-phoneme relationship using multimodality support.