1. Bradley C, Wilbur R. Visual Form and Event Semantics Predict Transitivity in Silent Gestures: Evidence for Compositionality. Cogn Sci 2023; 47:e13331. [PMID: 37635624] [DOI: 10.1111/cogs.13331]
Abstract
Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy's cognitive foundation.
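The classification step described in this abstract (item-by-item transitivity prediction from annotated manual features) can be illustrated with a minimal linear-SVM sketch. The feature names, toy data, and labels below are invented for illustration and are not the paper's coding scheme or dataset; the SVM is trained with a simple Pegasos-style subgradient method rather than a library solver.

```python
# Hypothetical binary feature annotations per gesture (NOT the paper's
# actual coding scheme): [handling_handshape, two_hands_active,
# path_movement, repeated_movement]. Label 1 = transitive, 0 = intransitive.
data = [
    ([1, 1, 0, 0], 1),  # e.g., a "pop the balloon"-like gesture
    ([1, 0, 1, 0], 1),
    ([1, 1, 1, 0], 1),
    ([1, 1, 0, 1], 1),
    ([0, 0, 1, 1], 0),  # e.g., a "walk"-like gesture
    ([0, 1, 1, 0], 0),
    ([0, 0, 0, 1], 0),
    ([0, 0, 1, 0], 0),
]

def train_linear_svm(data, epochs=200, lr=0.05, lam=0.01):
    """Minimize the L2-regularized hinge loss by subgradient descent."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, label in data:
            y = 1 if label == 1 else -1  # SVMs use +/-1 labels
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i in range(dim):         # L2 regularization shrinks w
                w[i] -= lr * lam * w[i]
            if margin < 1:               # hinge-loss subgradient step
                for i in range(dim):
                    w[i] += lr * y * x[i]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = train_linear_svm(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
print(f"item-by-item training accuracy: {accuracy:.2f}")
```

Because the toy data are linearly separable (the hypothetical handling-handshape feature alone predicts transitivity), the classifier fits them; in the paper's setting, prediction accuracy instead serves as evidence that the annotated subunits carry transitivity information.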
Affiliation(s)
- Ronnie Wilbur
- Department of Linguistics, Purdue University
- Department of Speech, Language, and Hearing Sciences, Purdue University
2. Emerging Lexicon for Objects in Central Taurus Sign Language. Languages 2022. [DOI: 10.3390/languages7020118]
Abstract
This paper investigates object-based and action-based iconic strategies, and combinations of them, used to refer to everyday objects in the lexicon of an emerging village sign language, Central Taurus Sign Language (CTSL) of Turkey. CTSL emerged naturally in the absence of an accessible language model within the last half century. It provides a vantage point for studying how languages emerge, because it is relatively young and its very first creators are still alive today. Participants from two successive age cohorts were tested in two studies: (1) CTSL signers viewed 26 everyday objects in isolation and labeled them to an addressee in a picture-naming task, and (2) CTSL signers viewed 16 everyday objects in isolation and labeled them to an addressee, then viewed the same objects in context, being acted upon by a human agent in short video clips, and described the events in the clips to a communicative partner. Overall, the CTSL signers favored object-based and action-based iconic strategies equally, with no significant difference across cohorts in either study. However, there were significant differences in how iconic strategies were deployed for objects presented in isolation vs. in context. Additionally, the CTSL-2 signers produced significantly longer sign strings than the CTSL-1 signers when objects were presented in isolation, and significantly more combinatorial sign strings than the CTSL-1 signers. When objects were presented in context, both cohorts produced significantly shorter sign strings and more single-sign strings overall, though the CTSL-2 signers still produced significantly more combinatorial sign strings in context. Together, the two studies characterize the types and combinations of iconic strategies used in isolation vs. in context in the emerging lexicon of a language system in its initial stages.
3. Comparing Iconicity Trade-Offs in Cena and Libras during a Sign Language Production Task. Languages 2022. [DOI: 10.3390/languages7020098]
Abstract
Although classifier constructions generally aim for highly iconic depictions, like any other part of language they may be constrained by phonology. We compare utterances containing motion events between signers of Cena, an emerging rural sign language in Brazil, and Libras, the national sign language of Brazil, to investigate whether a difference in time-depth (a relevant factor in phonological reorganisation) influences trade-offs involving iconicity. First, we find that, contrary to what might be expected given that emerging sign languages exhibit great variation and favour highly iconic prototypes, Cena signers exhibit neither greater variation nor more complex handshapes in classifier constructions. We also report a divergence from findings on Nicaraguan Sign Language (NSL) in how signers encode movement in a young language: Cena signers tend to encode manner and path simultaneously, unlike NSL signers of comparable cohorts. Cena signers therefore pattern more like non-signing gesturers and signers of urban sign languages, including the Libras signers in our study. The study adds to the still-limited body of investigations into classifiers in emerging sign languages, demonstrating how different aspects of linguistic organisation, including phonology, can interact with classifier form.
4. Bradley C, Malaia EA, Siskind JM, Wilbur RB. Visual form of ASL verb signs predicts non-signer judgment of transitivity. PLoS One 2022; 17:e0262098. [PMID: 35213558] [PMCID: PMC8880903] [DOI: 10.1371/journal.pone.0262098]
Abstract
Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. We therefore asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To answer this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers cue in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie the detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
Collapse
Affiliation(s)
- Chuck Bradley
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Evie A. Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama, United States of America
- Jeffrey Mark Siskind
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
- Ronnie B. Wilbur
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
5. Kirby S, Tamariz M. Cumulative cultural evolution, population structure and the origin of combinatoriality in human language. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200319. [PMID: 34894728] [PMCID: PMC8666903] [DOI: 10.1098/rstb.2020.0319]
Abstract
Language is the primary repository and mediator of human collective knowledge. A central question for evolutionary linguistics is the origin of the combinatorial structure of language (sometimes referred to as duality of patterning), one of language's basic design features. Emerging sign languages provide a promising arena in which to study the emergence of language properties. Many, but not all, such sign languages exhibit combinatoriality, which generates testable hypotheses about its source. We hypothesize that combinatoriality is the inevitable result of learning biases in cultural transmission, and that population structure explains differences across languages. We construct an agent-based model with population turnover. Bayesian learning agents with a prior preference for compressible languages (modelling a pressure for language learnability) communicate in pairs under pressure to reduce ambiguity. We include two transmission conditions: agents learn the language either from the oldest agent or from an agent in the middle of their lifespan. Results suggest that (1) combinatoriality emerges during iterated cultural transmission under concurrent pressures for simplicity and expressivity, and (2) population dynamics affect the rate of evolution, which is faster when agents learn from other learners than when they learn from old individuals; this may explain combinatoriality's absence in some emerging sign languages. We discuss the consequences of these findings for cultural evolution, highlighting the interplay of population-level, functional, and cognitive factors. This article is part of a discussion meeting issue 'The emergence of collective knowledge and cumulative culture in animals, humans and machines'.
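The core dynamic this abstract describes (a learnability bias amplified over generations of cultural transmission) can be sketched with a toy iterated-learning simulation. This is not the authors' model: the meaning space, the fixed compositional target, the bias value, and the bottleneck size are all invented for illustration, and the Bayesian learner is collapsed into a simple probabilistic preference for compressible (compositional) generalization when a meaning was not observed.

```python
import random

random.seed(1)

# Meanings are (a, b) pairs; the compositional (compressible) signal
# reuses one sub-unit per meaning component, e.g. (1, 2) -> "b" + "z".
MEANINGS = [(a, b) for a in range(3) for b in range(3)]

def compositional_signal(m):
    return "abc"[m[0]] + "xyz"[m[1]]

def random_holistic_signal():
    # Uppercase alphabet keeps holistic inventions distinct from
    # compositional forms, so compositionality is easy to measure.
    return "".join(random.choice("QWERTYUP") for _ in range(2))

BIAS = 0.7        # illustrative prior preference for compressible forms
BOTTLENECK = 6    # meanings observed per learner, out of 9

def next_generation(language):
    """One transmission step: observe a subset, reproduce or generalize."""
    observed = dict(random.sample(list(language.items()), BOTTLENECK))
    new_language = {}
    for m in MEANINGS:
        if m in observed:
            new_language[m] = observed[m]               # reproduce input
        elif random.random() < BIAS:
            new_language[m] = compositional_signal(m)   # generalize
        else:
            new_language[m] = random_holistic_signal()  # invent holistically
    return new_language

def compositionality(language):
    return sum(language[m] == compositional_signal(m) for m in MEANINGS) / len(MEANINGS)

language = {m: random_holistic_signal() for m in MEANINGS}  # gen 0: holistic
history = [compositionality(language)]
for _ in range(30):
    language = next_generation(language)
    history.append(compositionality(language))

print(f"compositionality: gen 0 = {history[0]:.2f}, gen 30 = {history[-1]:.2f}")
```

Holistic forms survive only while they keep passing through the bottleneck, whereas compositional forms can be regenerated even when unseen, so compositionality ratchets upward across generations, a simplified analogue of the simplicity-pressure result in the abstract.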
Collapse
Affiliation(s)
- Simon Kirby
- Centre for Language Evolution, University of Edinburgh, Edinburgh, UK
- Monica Tamariz
- Department of Psychology, Heriot-Watt University, Edinburgh, UK
6. Frederiksen AT. Emerging ASL Distinctions in Sign-Speech Bilinguals' Signs and Co-speech Gestures in Placement Descriptions. Front Psychol 2021; 12:686485. [PMID: 34413812] [PMCID: PMC8369348] [DOI: 10.3389/fpsyg.2021.686485]
Abstract
Previous work on placement expressions (e.g., "she put the cup on the table") has demonstrated cross-linguistic differences in the specificity of placement expressions in the native language (L1), with some languages preferring more general, widely applicable expressions and others preferring more specific expressions based on more fine-grained distinctions. Research on second language (L2) acquisition of an additional spoken language has shown that learning the appropriate L2 placement distinctions poses a challenge for adult learners, whose L2 semantic representations can be non-target-like and have fuzzy boundaries. It is unknown whether similar effects apply to learners acquiring an L2 in a different sensory-motor modality, e.g., hearing learners of a sign language. Placement verbs in signed languages tend to be highly iconic and to exhibit transparent semantic boundaries, which may facilitate the acquisition of signed placement verbs. In addition, little is known about how exposure to different semantic boundaries in placement events in a typologically different language affects lexical semantic meaning in the L1. In this study, we examined placement event descriptions (in American Sign Language (ASL) and English) in hearing L2 learners of ASL who were native speakers of English. L2 signers' ASL placement descriptions looked similar to those of two Deaf, native ASL signer controls, suggesting that the iconicity and transparency of placement distinctions in the visual modality may facilitate L2 acquisition. Nevertheless, L2 signers used a wider range of handshapes in ASL and used them less appropriately, indicating that fuzzy semantic boundaries occur in cross-modal L2 acquisition as well. In addition, while the L2 signers' English verbal expressions did not differ from those of a non-signing control group, placement distinctions expressed in co-speech gesture were marginally more ASL-like for L2 signers, suggesting that exposure to different semantic boundaries can also change how placement is conceptualized in the L1.
Collapse
Affiliation(s)
- Anne Therese Frederiksen
- Department of Linguistics, University of California, San Diego, La Jolla, CA, United States; Department of Language Science, University of California, Irvine, Irvine, CA, United States
7. Rodrigues ED, Santos AJ, Veppo F, Pereira J, Hobaiter C. Connecting primate gesture to the evolutionary roots of language: A systematic review. Am J Primatol 2021; 83:e23313. [PMID: 34358359] [DOI: 10.1002/ajp.23313]
Abstract
Comparative psychology provides important contributions to our understanding of the origins of human language. The presence of common features in human and nonhuman primate communication can be used to suggest the evolutionary trajectories of potential precursors to language. However, to do so effectively, our findings must be comparable across diverse species. This systematic review describes the current landscape of data available from studies of gestural communication in human and nonhuman primates that make an explicit connection to language evolution. We found a similar number of studies on human and nonhuman primates, but very few studies included data from more than one species. As a result, evolutionary inferences remain restricted to comparisons across studies. We identify areas of focus, bias, and apparent gaps within the field. Different domains have been studied in human and nonhuman primates, with relatively few nonhuman primate studies of ontogeny and relatively few human studies of gesture form. Diversity in focus, methods, and socio-ecological context fills important gaps and provides nuanced understanding, but only where the source of any difference between studies is transparent. Many studies provide some definition for their use of gesture, but definitions of gesture, and in particular criteria for intentional use, are absent from the majority of human studies. We find systematic differences between human and nonhuman primate studies in research scope, incorporation of other modalities, research setting, and study design. We highlight eight areas in a call to action through which we can strengthen our ability to investigate the contribution of gestural communication to the evolutionary roots of human language.
Collapse
Affiliation(s)
- Evelina D Rodrigues
- William James Center for Research, ISPA-Instituto Universitário, Lisbon, Portugal
- António J Santos
- William James Center for Research, ISPA-Instituto Universitário, Lisbon, Portugal
- Flávia Veppo
- Department of Applied Psychology, School of Psychology, University of Minho, Braga, Portugal
- Joana Pereira
- Centre for Ecology, Evolution and Environmental Changes, Faculdade de Ciências, Universidade de Lisboa, Lisboa, Portugal
- Catherine Hobaiter
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Scotland, UK
8. Abner N, Namboodiripad S, Spaepen E, Goldin-Meadow S. Emergent Morphology in Child Homesign: Evidence from Number Language. Lang Learn Dev 2021; 18:16-40. [PMID: 35603228] [PMCID: PMC9122328] [DOI: 10.1080/15475441.2021.1922281]
Abstract
Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, in which complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in a communication system developed without input from a conventional language: homesign. We study the communication systems of four deaf child homesigners (mean age 8;02). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners incorporate number devices not only into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.
Collapse
Affiliation(s)
- Natasha Abner
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
- Savithry Namboodiripad
- Department of Linguistics, University of Michigan, Ann Arbor, MI, USA
9. Napoli DJ, Ferrara C. Correlations Between Handshape and Movement in Sign Languages. Cogn Sci 2021; 45:e12944. [PMID: 34018242] [PMCID: PMC8243953] [DOI: 10.1111/cogs.12944]
Abstract
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.
10. Rissman L, Horton L, Flaherty M, Senghas A, Coppola M, Brentari D, Goldin-Meadow S. The communicative importance of agent-backgrounding: Evidence from homesign and Nicaraguan Sign Language. Cognition 2020; 203:104332. [PMID: 32559513] [DOI: 10.1016/j.cognition.2020.104332]
Abstract
Some concepts are more essential for human communication than others. In this paper, we investigate whether the concept of agent-backgrounding is sufficiently important for communication that linguistic structures for encoding it are present in young sign languages. Agent-backgrounding constructions serve to reduce the prominence of the agent; the English passive sentence "a book was knocked over" is an example. Although these constructions are widely attested cross-linguistically, there is little prior research on the emergence of such devices in new languages. Here we studied how agent-backgrounding constructions emerge in Nicaraguan Sign Language (NSL) and adult homesign systems. We found that NSL signers have innovated both lexical and morphological devices for expressing agent-backgrounding, indicating that conveying a flexible perspective on events has deep communicative value. At the same time, agent-backgrounding devices did not emerge at the same time as agentive devices. This result suggests that agent-backgrounding does not have the same core cognitive status as agency. The emergence of agent-backgrounding morphology appears to depend on receiving as input a linguistic system in which devices for expressing agency are already well-established.
Collapse
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin - Madison, 1202 W. Johnson St., Madison, WI 53706, United States of America.
- Laura Horton
- Department of Linguistics, University of Texas at Austin, 305 E. 23rd Street, Austin, TX 78712, United States of America.
- Molly Flaherty
- Department of Psychology, Swarthmore College, 500 College Avenue, Swarthmore, PA 19081, United States of America.
- Ann Senghas
- Department of Psychology, Barnard College, 3009 Broadway, New York, NY 10027, United States of America.
- Marie Coppola
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Unit 1020, Storrs, CT 06269, United States of America; Department of Linguistics, University of Connecticut, 365 Fairfield Way, Unit 1145, Storrs, CT 06269, United States of America.
- Diane Brentari
- Center for Gesture, Sign, and Language, University of Chicago, Chicago, IL 60637, United States of America; Department of Linguistics, University of Chicago, 1115 E. 58th Street, Chicago, IL 60637, United States of America.
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 S. University Ave., Chicago, IL 60637, United States of America; Center for Gesture, Sign, and Language, University of Chicago, Chicago, IL 60637, United States of America.
11.
Abstract
Recent years have witnessed a growing interest in behavioral and neuroimaging studies on the processing of symbolic communicative gestures, such as pantomimes and emblems, but well-controlled stimuli have been scarce. This study describes a dataset of more than 200 video clips of an actress performing pantomimes (gestures that mimic object-directed/object-use actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Gestures were divided into four lists. For each of these four lists, 50 Italian and 50 American raters judged the meaningfulness of the gestures and provided names and descriptions for them. The results of these rating and norming measures are reported separately for the Italian and American raters, offering the first normed set of meaningful and meaningless gestures for experimental studies. The stimuli are available for download via the Figshare database.
12. Miozzo M, Villabol M, Navarrete E, Peressotti F. Hands show where things are: The close similarity between sign and natural space. Cognition 2019; 196:104106. [PMID: 31841814] [DOI: 10.1016/j.cognition.2019.104106]
Abstract
Many of the signs produced across sign languages are iconic, in the sense that they resemble the concepts they represent. We examined whether location, one of the basic sign parameters along with handshape and movement, is systematically used for purposes of iconicity. Our findings revealed a mapping of vertical sign space that is exploited in its entirety for encoding typical locations in natural space. In all twenty sign languages we analyzed, signs for concepts typically occurring in high regions of natural space (e.g., cloud) were more likely to have high locations than signs for concepts occurring in low regions (e.g., root). Furthermore, the height of signs produced to identify a visual object varied with the object's position (e.g., it was higher for a basketball in the basket than for a basketball on the floor). It thus appears that signing space is permeable to semantic and episodic features, and that iconicity plays a crucial role in making signing space so adaptable.
13. Janke V, Marshall CR. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition. Front Psychol 2017; 8:2007. [PMID: 29250001] [PMCID: PMC5715371] [DOI: 10.3389/fpsyg.2017.02007]
Abstract
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from having only a limited repertoire of handshapes in manual gesture to draw upon or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire; if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. Thirty sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the grammar of the language being learned.
Collapse
Affiliation(s)
- Vikki Janke
- English Language and Linguistics, University of Kent, Canterbury, United Kingdom
- Chloë R. Marshall
- Department of Psychology and Human Development, UCL Institute of Education, London, United Kingdom
14. Cartmill EA, Rissman L, Novack M, Goldin-Meadow S. The development of iconicity in children's co-speech gesture and homesign. Language, Interaction and Acquisition 2017; 8:42-68. [PMID: 29034011] [DOI: 10.1075/lia.8.1.03car]
Abstract
Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children's use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to its emergence in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture: the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1- to 5-year-olds, and how it relates to the semantic content of children's communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children's ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.
15. van Nispen K, van de Sandt-Koenderman WME, Krahmer E. Production and Comprehension of Pantomimes Used to Depict Objects. Front Psychol 2017; 8:1095. [PMID: 28744232] [PMCID: PMC5504161] [DOI: 10.3389/fpsyg.2017.01095]
Abstract
Pantomime, gesture in the absence of speech, has no conventional meaning. Nevertheless, individuals seem able both to produce pantomimes and to derive meaning from them. A number of studies have addressed the use of co-speech gesture, but little is known about pantomime. Therefore, the question of how people construct and understand pantomimes arises in gesture research. To determine how people use pantomimes, we asked participants to depict a set of objects using pantomimes only. We annotated what representation techniques people produced. Furthermore, using judgment tasks, we assessed the pantomimes' comprehensibility. Analyses showed that similar techniques were used to depict objects across individuals. Objects with a default depiction method were better comprehended than objects for which there was no such default. More specifically, tools and objects depicted using a handling technique were better understood. The open-answer experiment showed low interpretation accuracy; conversely, the forced-choice experiment showed ceiling effects. These results suggest that across individuals, similar strategies are deployed to produce pantomime, with the handling technique as the apparent preference. This might indicate that the production of pantomimes is based on mental representations that are intrinsically similar. Furthermore, pantomime conveys semantically rich, but ambiguous, information, and its interpretation is highly dependent on context. This pantomime database is available online: https://dataverse.nl/dataset.xhtml?persistentId=hdl:10411/QZHO6M. It can be used as a baseline against which clinical groups can be compared.
Affiliation(s)
- Karin van Nispen
- Tilburg Center for Cognition and Communication, Department of Communication and Information Sciences, Tilburg University, Tilburg, Netherlands
- W Mieke E van de Sandt-Koenderman
- Rijndam Rehabilitation Center, RoNeRes, Rotterdam, Netherlands; Erasmus Medical Center, Institute of Rehabilitation Medicine, Rotterdam, Netherlands
- Emiel Krahmer
- Tilburg Center for Cognition and Communication, Department of Communication and Information Sciences, Tilburg University, Tilburg, Netherlands
|
16
|
Gagne DL, Coppola M. Visible Social Interactions Do Not Support the Development of False Belief Understanding in the Absence of Linguistic Input: Evidence from Deaf Adult Homesigners. Front Psychol 2017. [PMID: 28626432 PMCID: PMC5454053 DOI: 10.3389/fpsyg.2017.00837] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Congenitally deaf individuals exhibit enhanced visuospatial abilities relative to normally hearing individuals. An early example is the increased sensitivity of deaf signers to stimuli in the visual periphery (Neville and Lawson, 1987a). While these enhancements are robust and extend across a number of visual and spatial skills, they seem not to extend to other domains which could potentially build on these enhancements. For example, congenitally deaf children, in the absence of adequate language exposure and acquisition, do not develop typical social cognition skills as measured by traditional Theory of Mind tasks. These delays/deficits occur despite their presumed lifetime use of visuo-perceptual abilities to infer the intentions and behaviors of others (e.g., Pyers and Senghas, 2009; O’Reilly et al., 2014). In a series of studies, we explore the limits on the plasticity of visually based socio-cognitive abilities, from perspective taking to Theory of Mind/False Belief, in rarely studied individuals: deaf adults who have not acquired a conventional language (Homesigners). We compared Homesigners’ performance to that of two other understudied groups in the same culture: Deaf signers of an emerging language (Cohort 1 of Nicaraguan Sign Language), and hearing speakers of Spanish with minimal schooling. We found that homesigners performed equivalently to both comparison groups with respect to several visual socio-cognitive abilities: Perspective Taking (Levels 1 and 2), adapted from Masangkay et al. (1974), and the False Photograph task, adapted from Leslie and Thaiss (1992). However, a lifetime of visuo-perceptual experiences (observing the behavior and interactions of others) did not support success on False Belief tasks, even when linguistic demands were minimized. Participants in the comparison groups outperformed the Homesigners, but did not universally pass the False Belief tasks. 
Our results suggest that while some of the social development achievements of young typically developing children may be dissociable from their linguistic experiences, language and/or educational experiences clearly scaffold the transition into False Belief understanding. The lack of experience using a shared language cannot be overcome, even with the benefit of many years of observing others' behaviors and the potential neural reorganization and visuospatial enhancements resulting from deafness.
Affiliation(s)
- Deanna L Gagne
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Marie Coppola
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States; Department of Linguistics, University of Connecticut, Storrs, CT, United States
|
17
|
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, Illinois 60637
|
18
|
Goldin-Meadow S, Brentari D. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behav Brain Sci 2017; 40:e46. [PMID: 26434499 PMCID: PMC4821822 DOI: 10.1017/s0140525x15001247] [Citation(s) in RCA: 124] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Affiliation(s)
- Susan Goldin-Meadow
- Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL. goldin-meadow-lab.uchicago.edu
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL. signlanguagelab.uchicago.edu
|
19
|
Carrigan EM, Coppola M. Successful communication does not drive language development: Evidence from adult homesign. Cognition 2016; 158:10-27. [PMID: 27771538 DOI: 10.1016/j.cognition.2016.09.012] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 07/07/2016] [Accepted: 09/28/2016] [Indexed: 10/20/2022]
Abstract
Constructivist accounts of language acquisition maintain that the language learner aims to match a target provided by mature users. Communicative problem solving in the context of social interaction and matching a linguistic target or model are presented as primary mechanisms driving the language development process. However, research on the development of homesign gesture systems by deaf individuals who have no access to a linguistic model suggests that aspects of language can develop even when typical input is unavailable. In four studies, we examined the role of communication in the genesis of homesign systems by assessing how well homesigners' family members comprehend homesign productions. In Study 1, homesigners' mothers showed poorer comprehension of homesign descriptions produced by their now-adult deaf child than of spoken Spanish descriptions of the same events produced by one of their adult hearing children. Study 2 found that the younger a family member was when they first interacted with their deaf relative, the better they understood the homesigner. Despite this, no family member comprehended homesign productions at levels that would be expected if family members co-generated homesign systems with their deaf relative via communicative interactions. Study 3 found that mothers' poor or incomplete comprehension of homesign was not a result of incomplete homesign descriptions. In Study 4 we demonstrated that Deaf native users of American Sign Language, who had no previous experience with the homesigners or their homesign systems, nevertheless comprehended homesign productions out of context better than the homesigners' mothers. This suggests that homesign has comprehensible structure, to which mothers and other family members are not fully sensitive. Taken together, these studies show that communicative problem solving is not responsible for the development of structure in homesign systems. 
The role of this mechanism must therefore be re-evaluated in constructivist theories of language development.
|
20
|
Brentari D, Coppola M, Cho PW, Senghas A. Handshape complexity as a precursor to phonology: Variation, Emergence, and Acquisition. LANGUAGE ACQUISITION 2016; 24:283-306. [PMID: 33033424 PMCID: PMC7540628 DOI: 10.1080/10489223.2016.1187614] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
In this paper two dimensions of handshape complexity are analyzed as potential building blocks of phonological contrast: joint complexity and finger group complexity. We ask whether sign language patterns are elaborations of those seen in the gestures produced by hearing people without speech (pantomime) or a more radical re-organization of them. Data from adults and children are analyzed to address issues of cross-linguistic variation, emergence, and acquisition. Study 1 addresses these issues in adult signers and gesturers from the United States, Italy, China, and Nicaragua. Study 2 addresses these issues in child and adult groups (signers and gesturers) from the United States, Italy, and Nicaragua. We argue that handshape undergoes a fairly radical reorganization, including loss and reorganization of iconicity and feature redistribution, as phonologization takes place in both of these dimensions. Moreover, while the patterns investigated here are not evidence of duality of patterning, we conclude that they are indeed phonological, and that they appear earlier than related morphosyntactic patterns that use the same types of handshape.
|
21
|
Berent I. Commentary: "An Evaluation of Universal Grammar and the Phonological Mind"-UG Is Still a Viable Hypothesis. Front Psychol 2016; 7:1029. [PMID: 27471480 PMCID: PMC4943953 DOI: 10.3389/fpsyg.2016.01029] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022] Open
Abstract
Everett (2016b) criticizes The Phonological Mind thesis (Berent, 2013a,b) on logical, methodological and empirical grounds. Most of Everett’s concerns are directed toward the hypothesis that the phonological grammar is constrained by universal grammatical (UG) principles. Contrary to Everett’s logical challenges, here I show that the UG hypothesis is readily falsifiable, that universality is not inconsistent with innateness (Everett’s arguments to the contrary are rooted in a basic confusion of the UG phenotype and the genotype), and that its empirical evaluation does not require a full evolutionary account of language. A detailed analysis of one case study, the syllable hierarchy, presents a specific demonstration that people have knowledge of putatively universal principles that are unattested in their language and these principles are most likely linguistic in nature. Whether Universal Grammar exists remains unknown, but Everett’s arguments hardly undermine the viability of this hypothesis.
Affiliation(s)
- Iris Berent
- Phonology and Reading Laboratory, Department of Psychology, Northeastern University, Boston, MA, USA
|
22
|
Horton L, Goldin-Meadow S, Coppola M, Senghas A, Brentari D. Forging a morphological system out of two dimensions: Agentivity and number. OPEN LINGUISTICS 2015; 1:596-613. [PMID: 26740937 PMCID: PMC4699575 DOI: 10.1515/opli-2015-0021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature - unpunctuated repetition - in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).
Affiliation(s)
- M. Coppola
- University of Connecticut, Storrs, CT 06269, USA
- A. Senghas
- Barnard College, New York, NY 10027, USA
- D. Brentari
- University of Chicago, Chicago, IL 60637, USA
|
23
|
Goldin-Meadow S. Studying the mechanisms of language learning by varying the learning environment and the learner. LANGUAGE, COGNITION AND NEUROSCIENCE 2015; 30:899-911. [PMID: 26668813 PMCID: PMC4676577 DOI: 10.1080/23273798.2015.1016978] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Language learning is a resilient process, and many linguistic properties can be developed under a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model - the resilient properties of language - and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner, that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA
|
24
|
Goldin-Meadow S, Brentari D, Coppola M, Horton L, Senghas A. Watching language grow in the manual modality: nominals, predicates, and handshapes. Cognition 2015; 136:381-95. [PMID: 25546342 PMCID: PMC4308574 DOI: 10.1016/j.cognition.2014.11.029] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2013] [Revised: 11/09/2014] [Accepted: 11/17/2014] [Indexed: 11/18/2022]
Abstract
All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition: terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language, Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates: they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object.
This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.
Affiliation(s)
- M Coppola
- University of Connecticut, United States
- L Horton
- University of Chicago, United States
|
25
|
Perniss P, Özyürek A, Morgan G. The Influence of the Visual Modality on Language Structure and Conventionalization: Insights From Sign Language and Gesture. Top Cogn Sci 2015; 7:2-11. [DOI: 10.1111/tops.12127] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Revised: 11/13/2014] [Accepted: 11/13/2014] [Indexed: 11/29/2022]
Affiliation(s)
- Asli Özyürek
- MPI for Psycholinguistics & Radboud University Nijmegen
- Gary Morgan
- Deafness, Cognition and Language Research Centre & City University London
|
26
|
Goldin-Meadow S. The impact of time on predicate forms in the manual modality: signers, homesigners, and silent gesturers. Top Cogn Sci 2015; 7:169-84. [PMID: 25329421 PMCID: PMC4310783 DOI: 10.1111/tops.12119] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2013] [Revised: 11/11/2013] [Accepted: 02/07/2014] [Indexed: 11/27/2022]
Abstract
It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans (by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers) and finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.
|
27
|
Brentari D, Renzo AD, Keane J, Volterra V. Cognitive, Cultural, and Linguistic Sources of a Handshape Distinction Expressing Agentivity. Top Cogn Sci 2014; 7:95-123. [DOI: 10.1111/tops.12123] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2013] [Revised: 11/21/2013] [Accepted: 01/24/2014] [Indexed: 11/27/2022]
Affiliation(s)
- Alessio Di Renzo
- National Research Council (CNR), Institute of Cognitive Sciences and Technologies
- Virginia Volterra
- National Research Council (CNR), Institute of Cognitive Sciences and Technologies
|
28
|
Morgan G. On language acquisition in speech and sign: development of combinatorial structure in both modalities. Front Psychol 2014; 5:1217. [PMID: 25426085 PMCID: PMC4227467 DOI: 10.3389/fpsyg.2014.01217] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2014] [Accepted: 10/07/2014] [Indexed: 11/25/2022] Open
Abstract
Languages are composed of a conventionalized system of parts which allow speakers and signers to generate an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways to their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups are faced with a similar problem: "how do I make a language with combinatorial structure"? In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and productivity in using verb morphology by children in sign and speech.
Affiliation(s)
- Gary Morgan
- Language and Communication Science, City University London, London, UK
|
29
|
Quinto-Pozos D, Parrill F. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives. Top Cogn Sci 2014; 7:12-35. [PMID: 25348839 DOI: 10.1111/tops.12120] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2013] [Revised: 03/04/2014] [Accepted: 05/02/2014] [Indexed: 11/29/2022]
Abstract
Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively.
|
30
|
Marshall CR, Morgan G. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language. Top Cogn Sci 2014; 7:61-80. [PMID: 25329326 DOI: 10.1111/tops.12118] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2013] [Revised: 10/28/2014] [Accepted: 02/17/2014] [Indexed: 11/30/2022]
Abstract
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
|
31
|
Goldin-Meadow S. Widening the lens: what the manual modality reveals about language, learning and cognition. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130295. [PMID: 25092663 PMCID: PMC4123674 DOI: 10.1098/rstb.2013.0295] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture's ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 South University Avenue, Chicago, IL 60637, USA
32
Coppola M, Brentari D. From iconic handshapes to grammatical contrasts: longitudinal evidence from a child homesigner. Front Psychol 2014; 5:830. [PMID: 25191283 PMCID: PMC4139701 DOI: 10.3389/fpsyg.2014.00830] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Accepted: 07/11/2014] [Indexed: 11/25/2022] Open
Abstract
Many sign languages display crosslinguistic consistencies in the use of two iconic aspects of handshape, handshape type and finger group complexity. Handshape type is used systematically in form-meaning pairings (morphology): Handling handshapes (Handling-HSs), representing how objects are handled, tend to be used to express events with an agent ("hand-as-hand" iconicity), and Object handshapes (Object-HSs), representing an object's size/shape, are used more often to express events without an agent ("hand-as-object" iconicity). Second, in the distribution of meaningless properties of form (morphophonology), Object-HSs display higher finger group complexity than Handling-HSs. Some adult homesigners, who have not acquired a signed or spoken language and instead use a self-generated gesture system, exhibit these two properties as well. This study illuminates the development over time of both phenomena for one child homesigner, "Julio," age 7;4 (years; months) to 12;8. We elicited descriptions of events with and without agents to determine whether morphophonology and morphosyntax can develop without linguistic input during childhood, and whether these structures develop together or independently. 
Within the time period studied: (1) Julio used handshape type differently in his responses to vignettes with and without an agent; however, he did not exhibit the same pattern that was found previously in signers, adult homesigners, or gesturers: while he was highly likely to use a Handling-HS for events with an agent (82%), he was less likely to use an Object-HS for non-agentive events (49%); i.e., his productions were heavily biased toward Handling-HSs; (2) Julio exhibited higher finger group complexity in Object- than in Handling-HSs, as in the sign language and adult homesigner groups previously studied; and (3) these two dimensions of language developed independently, with phonological structure showing a sign language-like pattern at an earlier age than morphosyntactic structure. We conclude that iconicity alone is not sufficient to explain the development of linguistic structure in homesign systems. Linguistic input is not required for some aspects of phonological structure to emerge in childhood, and while linguistic input is not required for morphology either, it takes time to emerge in homesign.
Affiliation(s)
- Marie Coppola
- Departments of Psychology and Linguistics, Language Creation Laboratory, University of Connecticut, Storrs, CT, USA
- Diane Brentari
- Department of Linguistics, Sign Language Laboratory, University of Chicago, Chicago, IL, USA
33
Goldin-Meadow S. In search of resilient and fragile properties of language. JOURNAL OF CHILD LANGUAGE 2014; 41 Suppl 1:64-77. [PMID: 25023497 PMCID: PMC4100075 DOI: 10.1017/s030500091400021x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with - to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.
34
Applebaum L, Coppola M, Goldin-Meadow S. Prosody in a communication system developed without a language model. SIGN LANGUAGE AND LINGUISTICS 2014; 17:181-212. [PMID: 25574153 PMCID: PMC4285364 DOI: 10.1075/sll.17.2.02app] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Prosody, the "music" of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, features known to mark phrase and utterance boundaries in established sign languages were used to consistently mark the ends of utterances, but not to mark phrase or utterance internal boundaries. A single child can thus develop the seeds of a prosodic system, but full elaboration may require more time, more users, or even more generations to blossom.
35
Berent I. The phonological mind. Trends Cogn Sci 2013; 17:319-27. [DOI: 10.1016/j.tics.2013.05.004] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 05/06/2013] [Accepted: 05/09/2013] [Indexed: 11/28/2022]
36
Abstract
All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system, broadly, or speech, specifically, remains unknown. To address this question, here, we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken languages, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement)—a natural phonological constraint attested in every human language—nonsigners rapidly learned to selectively rely on movement to define syllables and they also learned to partly ignore it in the identification of morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest the design of the phonological system is partly amodal.
Affiliation(s)
- Iris Berent
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States of America.
37
Brentari D, Coppola M. What sign language creation teaches us about language. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2012; 4:201-211. [PMID: 26304196 DOI: 10.1002/wcs.1212] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to these questions depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general.
Affiliation(s)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL, USA
- Marie Coppola
- Department of Psychology, University of Connecticut, Storrs, CT, USA; Department of Linguistics, University of Connecticut, Storrs, CT, USA
38
Hunsicker D, Goldin-Meadow S. Hierarchical structure in a self-created communication system: Building nominal constituents in homesign. LANGUAGE 2012; 88:732-763. [PMID: 23626381 PMCID: PMC3633571 DOI: 10.1353/lan.2012.0092] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Deaf children whose hearing losses are so severe that they cannot acquire spoken language and whose hearing parents have not exposed them to sign language nevertheless use gestures, called homesigns, to communicate. Homesigners have been shown to refer to entities by pointing at that entity (a demonstrative, that). They also use iconic gestures and category points that refer, not to a particular entity, but to its class (a noun, bird). We used longitudinal data from a homesigner called David to test the hypothesis that these different types of gestures are combined to form larger, multi-gesture nominal constituents (that bird). We verified this hypothesis by showing that David's multi-gesture combinations served the same semantic and syntactic functions as demonstrative gestures or noun gestures used on their own. In other words, the larger unit substituted for the smaller units and, in this way, functioned as a nominal constituent. Children are thus able to refer to entities using multi-gesture units that contain both nouns and demonstratives, even when they do not have a conventional language to provide a model for this type of hierarchical constituent structure.
39
Cormier K, Quinto-Pozos D, Sevcikova Z, Schembri A. Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures. LANGUAGE & COMMUNICATION 2012; 32:329-348. [PMID: 23805017 PMCID: PMC3688355 DOI: 10.1016/j.langcom.2012.09.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
In this paper, we compare so-called "classifier" constructions in signed languages (which we refer to as "depicting constructions") with comparable iconic gestures produced by non-signers. We show clear correspondences between entity constructions and observer viewpoint gestures on the one hand, and handling constructions and character viewpoint gestures on the other. Such correspondences help account for both lexicalisation and de-lexicalisation processes in signed languages and how these processes are influenced by viewpoint. Understanding these processes is crucial when coding and annotating natural sign language data.
Affiliation(s)
- Kearsy Cormier
- Deafness, Cognition & Language Research Centre, University College London, UK
- Zed Sevcikova
- Deafness, Cognition & Language Research Centre, University College London, UK
40
Berent I, Vaknin-Nusbaum V, Balaban E, Galaburda AM. Dyslexia impairs speech recognition but can spare phonological competence. PLoS One 2012; 7:e44875. [PMID: 23028654 PMCID: PMC3447000 DOI: 10.1371/journal.pone.0044875] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2012] [Accepted: 08/09/2012] [Indexed: 11/19/2022] Open
Abstract
Dyslexia is associated with numerous deficits to speech processing. Accordingly, a large literature asserts that dyslexics manifest a phonological deficit. Few studies, however, have assessed the phonological grammar of dyslexics, and none has distinguished a phonological deficit from a phonetic impairment. Here, we show that these two sources can be dissociated. Three experiments demonstrate that a group of adult dyslexics studied here is impaired in phonetic discrimination (e.g., ba vs. pa), and their deficit compromises even the basic ability to identify acoustic stimuli as human speech. Remarkably, the ability of these individuals to generalize grammatical phonological rules is intact. Like typical readers, these Hebrew-speaking dyslexics identified ill-formed AAB stems (e.g., titug) as less wordlike than well-formed ABB controls (e.g., gitut), and both groups automatically extended this rule to nonspeech stimuli, irrespective of reading ability. The contrast between the phonetic and phonological capacities of these individuals demonstrates that the algebraic engine that generates phonological patterns is distinct from the phonetic interface that implements them. While dyslexia compromises the phonetic system, certain core aspects of the phonological grammar can be spared.
Affiliation(s)
- Iris Berent
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States of America.
41
Abstract
When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, Chicago, Illinois 60637, USA.
42
Sandler W. THE PHONOLOGICAL ORGANIZATION OF SIGN LANGUAGES. LANGUAGE AND LINGUISTICS COMPASS 2012; 6:162-182. [PMID: 23539295 PMCID: PMC3608481 DOI: 10.1002/lnc3.326] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Visually perceivable and movable parts of the body - the hands, facial features, head, and upper body - are the articulators of sign language. It is through these articulators that words are formed, constrained, and contrasted with one another, and that prosody is conveyed. This article provides an overview of the way in which phonology is organized in the alternative modality of sign language.
43
Berent I, Balaban E, Vaknin-Nusbaum V. How linguistic chickens help spot spoken-eggs: phonological constraints on speech identification. Front Psychol 2011; 2:182. [PMID: 21949509 PMCID: PMC3171785 DOI: 10.3389/fpsyg.2011.00182] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2011] [Accepted: 07/19/2011] [Indexed: 11/25/2022] Open
Abstract
It has long been known that the identification of aural stimuli as speech is context-dependent (Remez et al., 1981). Here, we demonstrate that the discrimination of speech stimuli from their non-speech transforms is further modulated by their linguistic structure. We gauge the effect of phonological structure on discrimination across different manifestations of well-formedness in two distinct languages. One case examines the restrictions on English syllables (e.g., the well-formed melif vs. ill-formed mlif); another investigates the constraints on Hebrew stems by comparing ill-formed AAB stems (e.g., TiTuG) with well-formed ABB and ABC controls (e.g., GiTuT, MiGuS). In both cases, non-speech stimuli that conform to well-formed structures are harder to discriminate from speech than stimuli that conform to ill-formed structures. Auxiliary experiments rule out alternative acoustic explanations for this phenomenon. In English, we show that acoustic manipulations that mimic the mlif–melif contrast do not impair the classification of non-speech stimuli whose structure is well-formed (i.e., disyllables with phonetically short vs. long tonic vowels). Similarly, non-speech stimuli that are ill-formed in Hebrew present no difficulties to English speakers. Thus, non-speech stimuli are harder to classify only when they are well-formed in the participants’ native language. We conclude that the classification of non-speech stimuli is modulated by their linguistic structure: inputs that support well-formed outputs are more readily classified as speech.
Affiliation(s)
- Iris Berent
- Department of Psychology, Northeastern University Boston, MA, USA
44
Morford JP, Carlson ML. Sign Perception and Recognition in Non-Native Signers of ASL. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2011; 7:149-168. [PMID: 21686080 PMCID: PMC3114635 DOI: 10.1080/15475441.2011.543393] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition.
Affiliation(s)
- Jill P. Morford
- Department of Linguistics, University of New Mexico
- NSF Science of Learning Center on Visual Language and Visual Learning (VL2)
45
Goldin-Meadow S. Widening the Lens on Language Learning: Language Creation in Deaf Children and Adults in Nicaragua: Commentary on Senghas. Hum Dev 2010; 53:303-311. [PMID: 22476199 DOI: 10.1159/000321294] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]