1. Lu A, Liu S, Zhang J, Zhang M, Song T, Wang L, Wang X. The effect of phonetic similarity on domain-general executive control in the color-shape task: evidence from Cantonese-Mandarin and Beijing-dialect-Mandarin bidialectals. J Psycholinguist Res. 2023;52:1855-1874. PMID: 37326763. DOI: 10.1007/s10936-023-09958-z.
Abstract
The present study investigated whether bidialectals show an advantage in domain-general executive function similar to that of bilinguals and, if so, whether the phonetic similarity between two dialects modulates executive-function performance in a conflict-switching task. In all three groups of participants, latencies in the conflict-switching task were longest for switching trials in the mixed block (SMs), intermediate for non-switching trials in the mixed block (NMs), and shortest for non-switching trials in the pure block (NPs). Importantly, the difference between NPs and NMs varied as a function of the phonetic similarity between the two dialects: it was smallest for Cantonese-Mandarin bidialectals, intermediate for Beijing-dialect-Mandarin bidialectals, and largest for Mandarin native speakers. These results provide strong evidence for an executive-function advantage in balanced bidialectals that is modulated by the phonetic similarity between their two dialects, suggesting that phonetic similarity plays an important role in domain-general executive function.
Affiliation(s)
- Aitao Lu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- Siyi Liu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- Jijia Zhang
- The Department of Psychology & the Laboratory of the Department of Psychology, Renmin University of China, Beijing, 100872, China
- Meifang Zhang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- Tianhua Song
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- Lu Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
- Xuebin Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, China
2. Watkins F, Webb S, Stone C, Thompson RL. Language aptitude in the visuospatial modality: L2 British Sign Language acquisition and cognitive skills in British Sign Language-English interpreting students. Front Psychol. 2022;13:932370. PMID: 36186342. PMCID: PMC9516300. DOI: 10.3389/fpsyg.2022.932370.
Abstract
Sign language interpreting (SLI) is a cognitively challenging task performed mostly by second language learners (i.e., people not raised using a sign language as a home language). SLI students must first gain language fluency in a new visuospatial modality and then move between spoken and signed modalities as they interpret. As a result, many students plateau before reaching working fluency, and SLI training program drop-out rates are high. However, we know little about the skills required to become a successful interpreter: the few existing studies investigating SLI aptitude in terms of linguistic and cognitive skills lack baseline measures. Here we report a 3-year exploratory longitudinal skills-assessment study with British Sign Language (BSL)-English SLI students at two universities (n = 33). Our aims were twofold: first, to better understand the prerequisite skills that lead to successful SLI outcomes; second, to better understand how signing and interpreting skills affect other aspects of cognition. A battery of tasks was completed at four time points to assess skills including, but not limited to, multimodal and unimodal working memory, 2-dimensional and 3-dimensional mental rotation (MR), and English comprehension. Dependent measures were BSL and SLI course grades, BSL reproduction tests, and consecutive SLI tasks. Results reveal that initial BSL proficiency and 2D-MR were associated with selection for the degree program, while visuospatial working memory was linked to continuing with the program. 3D-MR improved throughout the degree, alongside some limited gains in auditory, visuospatial, and multimodal working memory tasks. Visuospatial working memory and MR were the skills most closely associated with BSL and SLI outcomes, particularly on tasks involving sign language production, highlighting the importance of cognition related to the visuospatial modality. These preliminary data will inform SLI training programs, from applicant selection to curriculum design.
Affiliation(s)
- Freya Watkins
- Multimodal Multilingual Language Processing Lab, School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Stacey Webb
- School of Social Sciences, Languages and Intercultural Studies, Heriot-Watt University, Edinburgh, United Kingdom
- Christopher Stone
- School of Social, Historical and Political Studies, University of Wolverhampton, Wolverhampton, United Kingdom
- Robin L. Thompson
- Multimodal Multilingual Language Processing Lab, School of Psychology, University of Birmingham, Birmingham, United Kingdom
3. The relation between working memory and language comprehension in signers and speakers. Acta Psychol (Amst). 2017;177:69-77. PMID: 28477456. DOI: 10.1016/j.actpsy.2017.04.014.
Abstract
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.
4. Liu HT, Squires B, Liu CJ. Articulatory suppression effects on short-term memory of signed digits and lexical items in hearing bimodal-bilingual adults. J Deaf Stud Deaf Educ. 2016;21:362-372. PMID: 27507848. DOI: 10.1093/deafed/enw048.
Abstract
We can gain a better understanding of short-term memory processes by studying different language codes and modalities. Three experiments were conducted to investigate: (a) Taiwanese Sign Language (TSL) digit spans in Chinese/TSL hearing bilinguals (n = 32); (b) American Sign Language (ASL) digit spans in English/ASL hearing bilinguals (n = 15); and (c) TSL lexical sign spans in Chinese/TSL hearing bilinguals (n = 22). Articulatory suppression conditions were manipulated to determine whether participants would use a speech- or sign-based code to rehearse lists of signed items. Results from all three experiments showed that oral suppression significantly reduced spans while manual suppression had no effect, revealing that participants were using speech-based rehearsal to retain lists of signed items in short-term memory. In addition, sub-vocal rehearsal in Chinese yielded higher digit spans than in English even though stimuli were perceived and recalled as signs. This difference was not found for lexical sign spans.
5. Marschark M, Sarchet T, Trani A. Effects of hearing status and sign language use on working memory. J Deaf Stud Deaf Educ. 2016;21:148-155. PMID: 26755684. PMCID: PMC4886321. DOI: 10.1093/deafed/env070.
Abstract
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks, and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That confound is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers), as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on the two tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory, and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals.
Affiliation(s)
- Marc Marschark
- National Technical Institute for the Deaf, Rochester Institute of Technology
- Thomastine Sarchet
- National Technical Institute for the Deaf, Rochester Institute of Technology
6. Marshall C, Jones A, Denmark T, Mason K, Atkinson J, Botting N, Morgan G. Deaf children's non-verbal working memory is impacted by their language experience. Front Psychol. 2015;6:527. PMID: 25999875. PMCID: PMC4419661. DOI: 10.3389/fpsyg.2015.00527.
Abstract
Several recent studies have suggested that deaf children perform more poorly on working memory tasks than hearing children, but these studies have not been able to determine whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come mostly from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and may therefore have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children of Deaf parents, who have been exposed to a sign language since birth (and who therefore have native language-learning opportunities within a normal developmental timeframe for language acquisition). A more direct, and therefore stronger, test of the hypothesis that the type and quality of language exposure impact working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition and reduced quality of language input compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n = 28), deaf children who were native users of British Sign Language (BSL; n = 8), and deaf children who used BSL but were not native signers (n = 19). We administered a battery of non-verbal reasoning, NVWM, and language tasks, and examined whether the groups differed on NVWM scores and whether scores on language tasks predicted scores on NVWM tasks. On the two executive-loaded NVWM tasks in our battery, the non-native signers performed less accurately than the native-signer and hearing groups (which did not differ from one another). Multiple regression analysis revealed that scores on the vocabulary measure predicted scores on those two executive-loaded NVWM tasks (with age and non-verbal reasoning partialled out). Our results suggest that, whatever the language modality (spoken or signed), rich language experience from birth, and the good language skills that result from such early acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.
Affiliation(s)
- Chloë Marshall
- Department of Psychology and Human Development, UCL Institute of Education, University College London, London, UK
- Anna Jones
- Deafness, Cognition and Language Research Centre, University College London, London, UK
- Tanya Denmark
- Deafness, Cognition and Language Research Centre, University College London, London, UK
- Kathryn Mason
- Deafness, Cognition and Language Research Centre, University College London, London, UK
- Joanna Atkinson
- Deafness, Cognition and Language Research Centre, University College London, London, UK
- Nicola Botting
- Division of Language and Communication Sciences, City University London, London, UK
- Gary Morgan
- Division of Language and Communication Sciences, City University London, London, UK