1
Gimeno-Martínez M, Baus C. Characterizing language production across modalities. Cogn Neuropsychol 2024:1-17. PMID: 38377394. DOI: 10.1080/02643294.2024.2315823.
Abstract
This study investigates factors influencing lexical access in language production across modalities (signed and oral). Data from deaf and hearing signers were reanalyzed (Baus and Costa, 2015, On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC). Brain Research, 1609(1), 40-53. https://doi.org/10.1016/j.brainres.2015.03.013; Gimeno-Martínez and Baus, 2022, Iconicity in sign language production: Task matters. Neuropsychologia, 167, 108166. https://doi.org/10.1016/j.neuropsychologia.2022.108166) testing the influence of psycholinguistic variables and ERP mean amplitudes on signing and naming latencies. Deaf signers' signing latencies were influenced by sign iconicity in the picture signing task, and by spoken psycholinguistic variables in the word-to-sign translation task. Additionally, ERP amplitudes before response influenced signing but not translation latencies. Hearing signers' latencies, both signing and naming, were influenced by sign iconicity and word frequency, with early ERP amplitudes predicting only naming latencies. These findings highlight general and modality-specific determinants of lexical access in language production.
Affiliation(s)
- Marc Gimeno-Martínez, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Cristina Baus, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
2
Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022;29:62-70. PMID: 35875424. PMCID: PMC9305909. DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages share significant processing commonalities. The evidence supporting this typically examines only frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. However, recent work has looked beyond these measures and uncovered numerous modality-dependent processing differences between sign languages and spoken languages, both by accounting for confounds that previously invalidated processing comparisons and by examining the specific conditions in which the differences arise. Even so, these processing differences are often dismissed as unspecific to language. Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities, along with the arguments against the importance of these differences. Key distinctions exist in the topography of the left anterior negativity (LAN) and in modulations of event-related potential (ERP) components such as the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information onto internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, in contrast to the auditory and visual feedback mechanisms of spoken languages. The only study to find ERP differences post-production revealed earlier lexical access in sign than in spoken languages. Themes of temporality, the validity of viewing the anatomical mechanisms as analogous, and the comprehensiveness of current language models are also discussed, with suggested improvements for future research.
Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral in developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell, Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
3
Gimeno-Martínez M, Baus C. Iconicity in sign language production: Task matters. Neuropsychologia 2022;167:108166. DOI: 10.1016/j.neuropsychologia.2022.108166.
4
The effects of multiple linguistic variables on picture naming in American Sign Language. Behav Res Methods 2021;54:2502-2521. PMID: 34918219. DOI: 10.3758/s13428-021-01751-x.
Abstract
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
5
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
6
McGarry ME, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Picture-naming in American Sign Language: an electrophysiological study of the effects of iconicity and structured alignment. Lang Cogn Neurosci 2020;36:199-210. PMID: 33732747. PMCID: PMC7959108. DOI: 10.1080/23273798.2020.1804601.
Abstract
A picture-naming task and ERPs were used to investigate effects of iconicity and visual alignment between signs and pictures in American Sign Language (ASL). For iconic signs, half the pictures visually overlapped with phonological features of the sign (e.g., the fingers of CAT align with a picture of a cat with prominent whiskers), while half did not (whiskers are not shown). Iconic signs were produced numerically faster than non-iconic signs and were associated with larger N400 amplitudes, akin to concreteness effects. Pictures aligned with iconic signs were named faster than non-aligned pictures, and there was a reduction in N400 amplitude. No behavioral effects were observed for the control group (English speakers). We conclude that sensory-motoric semantic features are represented more robustly for iconic than non-iconic signs (eliciting a concreteness-like N400 effect) and visual overlap between pictures and the phonological form of iconic signs facilitates lexical retrieval (eliciting a reduced N400).
Affiliation(s)
- Meghan E. McGarry, Joint Doctoral Program in Language and Communication Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Megan Mott, Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey, School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
7
Abstract
Seeing an object is a natural source for learning about the object's configuration. We show that language can also shape our knowledge about visual objects. We investigated sign languages, which enable deaf individuals to communicate through hand movements with as much expressive power as any other natural language. A few signs represent objects in a specific orientation. Sign-language users (signers) recognized visual objects faster when these were oriented as in the sign, and this match in orientation elicited specific brain responses in signers, as measured by event-related potentials (ERPs). Further analyses suggested that signers' responsiveness to object orientation derived from changes in the visual object representations induced by the signs. Our results also show that language facilitates discrimination between objects of the same kind (e.g., different cars), an effect never before reported with spoken languages. By focusing on sign language, we could better characterize the impact of language (a uniquely human ability) on visual object processing.
8
Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiol Lang 2020;1:249-267. PMID: 33043298. PMCID: PMC7544239. DOI: 10.1162/nol_a_00012.
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
Affiliation(s)
- Kurt Winsler, Department of Psychology, University of California, Davis
- Jonathan Grainger, Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique
9
Miozzo M, Villabol M, Navarrete E, Peressotti F. Hands show where things are: The close similarity between sign and natural space. Cognition 2019;196:104106. PMID: 31841814. DOI: 10.1016/j.cognition.2019.104106.
Abstract
Many of the signs produced across sign languages are iconic, in the sense that they resemble the concepts they represent. We examined whether location, one of the basic sign parameters along with handshape and movement, is systematically used for purposes of iconicity. Our findings revealed a mapping of vertical sign space that is exploited in its entirety for encoding typical locations in natural space. In all twenty of the sign languages we analyzed, signs were more likely to have high locations for concepts typically occurring in high rather than low regions of natural space (e.g., cloud vs. root). Furthermore, the height of signs produced to identify a visual object varied depending on the object's position (e.g., it was higher for a basketball in the basket than a basketball on the floor). It thus appears that signing space is permeable to semantic and episodic features, and that iconicity plays a crucial role in making signing space so adaptable.
10
Ortega G. Iconicity and Sign Lexical Acquisition: A Review. Front Psychol 2017;8:1280. PMID: 28824480. PMCID: PMC5539242. DOI: 10.3389/fpsyg.2017.01280.
Abstract
The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as a first (L1) and second (L2) language and points to some factors that may be the source of the disagreement. Regarding sign L1 acquisition, the contradictory findings may relate to iconicity being defined in a very broad sense, when a more fine-grained operationalisation might reveal an effect on sign learning. Regarding sign L2 acquisition, evidence shows a clear dissociation in the effect of iconicity: it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity, and that signs consist of a phonological form attached to a meaning, we can discern how iconicity impacts sign learning in both positive and negative ways.
Affiliation(s)
- Gerardo Ortega, Centre for Language Studies, Radboud University, Nijmegen, Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
11
Williams JT, Stone A, Newman SD. Operationalization of Sign Language Phonological Similarity and its Effects on Lexical Access. J Deaf Stud Deaf Educ 2017;22:303-315. PMID: 28575411. PMCID: PMC6364953. DOI: 10.1093/deafed/enx014.
Abstract
The cognitive mechanisms of sign language lexical access remain largely unknown. This study investigated whether phonological similarity facilitates lexical retrieval in sign languages using measures from a new lexical database for American Sign Language. Additionally, it aimed to determine which similarity metric best fits the present data in order to inform theories of how phonological similarity is constructed within the lexicon and to aid in the operationalization of phonological similarity in sign language. Sign repetition latencies and accuracy were obtained when native signers were asked to reproduce a sign displayed on a computer screen. Results indicated that, as predicted, phonological similarity facilitated repetition latencies and accuracy as long as there were no strict constraints on the type of sublexical features that overlapped. The data converged to suggest that one similarity measure, MaxD, defined as the overlap of any 4 sublexical features, likely best represents mechanisms of phonological similarity in the mental lexicon. Together, these data suggest that lexical access in sign language is facilitated by phonologically similar lexical representations in memory, and that the optimal operationalization is defined as liberal constraints on overlap of 4 out of 5 sublexical features, similar to the majority of extant definitions in the literature.