1
Rissman L, Horton L, Goldin-Meadow S. Universal Constraints on Linguistic Event Categories: A Cross-Cultural Study of Child Homesign. Psychol Sci 2023; 34:298-312. [PMID: 36608154] [DOI: 10.1177/09567976221140328]
Abstract
Languages carve up conceptual space in varying ways: for example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups: events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest ways of conceptualizing tool events that are so prominent as to constitute a universal constraint on how events are categorized in language.
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin-Madison
- Laura Horton
- Language Sciences Program, University of Wisconsin-Madison
- Susan Goldin-Meadow
- Department of Psychology, The University of Chicago; Center for Gesture, Sign, and Language, The University of Chicago
2
Loos C, German A, Meier RP. Simultaneous structures in sign languages: Acquisition and emergence. Front Psychol 2022; 13:992589. [PMID: 36619119] [PMCID: PMC9815181] [DOI: 10.3389/fpsyg.2022.992589]
Abstract
The visual-gestural modality affords its users simultaneous movement of several independent articulators and thus lends itself to simultaneous encoding of information. Much research has focused on the fact that sign languages coordinate two manual articulators, in addition to a range of non-manual articulators, to present different types of linguistic information simultaneously, from phonological contrasts to inflection, spatial relations, and information structure. Children and adults acquiring a signed language thus arguably need to comprehend and produce simultaneous structures to a greater extent than individuals acquiring a spoken language. In this paper, we discuss the simultaneous encoding found in emerging and established sign languages; we also discuss places where sign languages are unexpectedly sequential. We explore potential constraints on simultaneity in cognition and motor coordination that might impact the acquisition and use of simultaneous structures.
Affiliation(s)
- Cornelia Loos
- Institute of German Sign Language and Communication of the Deaf, Universität Hamburg, Hamburg, Germany
- Austin German
- Department of Linguistics, University of Texas at Austin, Austin, TX, United States
- Richard P. Meier
- Department of Linguistics, University of Texas at Austin, Austin, TX, United States
3
Slonimska A, Özyürek A, Capirci O. Simultaneity as an Emergent Property of Efficient Communication in Language: A Comparison of Silent Gesture and Sign Language. Cogn Sci 2022; 46:e13133. [PMID: 35613353] [PMCID: PMC9287048] [DOI: 10.1111/cogs.13133]
Abstract
Sign languages use multiple articulators and iconicity in the visual modality, which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. (2020) with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.
Affiliation(s)
- Anita Slonimska
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics, Radboud University
- Asli Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics, Radboud University; Donders Centre for Cognition, Radboud University
- Olga Capirci
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy
4
The Seeds of the Noun–Verb Distinction in the Manual Modality: Improvisation and Interaction in the Emergence of Grammatical Categories. Languages 2022. [DOI: 10.3390/languages7020095]
Abstract
The noun–verb distinction has long been considered a fundamental property of human language, and has been found in some form even in the earliest stages of language emergence, including homesign and the early generations of emerging sign languages. We present two experimental studies that use silent gesture to investigate how noun–verb distinctions develop in the manual modality through two key processes: (i) improvisation of novel signals by individuals, and (ii) use of those signals in interaction between communicators. We operationalise communicative interaction in two ways: a setting in which members of the dyad were in separate booths and were given a comprehension test after each stimulus, and a more naturalistic face-to-face conversation without comprehension checks. There were few differences between the two conditions, highlighting the robustness of the paradigm. Our findings from both experiments reflect patterns found in naturally emerging sign languages. Some formal distinctions arise in the earliest stages of improvisation and do not require interaction to develop. However, the full range of formal distinctions between nouns and verbs found in naturally emerging language did not appear with either improvisation or interaction, suggesting that transmitting the language to a new generation of learners might be necessary for these properties to emerge.
5
Rissman L, Goldin-Meadow S. The Development of Causal Structure without a Language Model. Language Learning and Development 2017; 13:286-299. [PMID: 28983210] [PMCID: PMC5624539] [DOI: 10.1080/15475441.2016.1254633]
Abstract
Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.
Affiliation(s)
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago
- Center for Gesture, Sign, and Language, University of Chicago
6
Goldin-Meadow S, Brentari D. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behav Brain Sci 2017; 40:e46. [PMID: 26434499] [PMCID: PMC4821822] [DOI: 10.1017/s0140525x15001247]
Abstract
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Affiliation(s)
- Susan Goldin-Meadow
- Departments of Psychology and Comparative Human Development, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL (goldin-meadow-lab.uchicago.edu)
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL 60637; Center for Gesture, Sign, and Language, Chicago, IL (signlanguagelab.uchicago.edu)
7
Özçalışkan Ş, Lucero C, Goldin-Meadow S. Does language shape silent gesture? Cognition 2015; 148:10-18. [PMID: 26707427] [DOI: 10.1016/j.cognition.2015.12.001]
Abstract
Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (N=20 per language) asked to describe physical motion events (e.g., running down a path), a domain known to elicit distinct patterns of speech and co-speech gesture in English and Turkish speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech: co-speech gestures produced by English speakers differed from co-speech gestures produced by Turkish speakers. However, we found no effect of language on gesture when it was produced on its own: silent gestures produced by English speakers were identical, in how motion elements were packaged and ordered, to silent gestures produced by Turkish speakers. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.
8
Horton L, Goldin-Meadow S, Coppola M, Senghas A, Brentari D. Forging a morphological system out of two dimensions: Agentivity and number. Open Linguistics 2015; 1:596-613. [PMID: 26740937] [PMCID: PMC4699575] [DOI: 10.1515/opli-2015-0021]
Abstract
Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature - unpunctuated repetition - in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).
Affiliation(s)
- M. Coppola
- University of Connecticut, Storrs, CT, 06269, USA
- A. Senghas
- Barnard College, New York, NY, 10027, USA
- D. Brentari
- University of Chicago, Chicago, IL, 60637, USA
9
Clay Z, Pople S, Hood B, Kita S. Young Children Make Their Gestural Communication Systems More Language-Like: Segmentation and Linearization of Semantic Elements in Motion Events. Psychol Sci 2014; 25:1518-1525. [DOI: 10.1177/0956797614533967]
Abstract
Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children’s learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system.
Affiliation(s)
- Zanna Clay
- Institute of Biology, University of Neuchatel
- Sally Pople
- Adult Speech and Language Therapy Department, Royal Hampshire County Hospital, Winchester, United Kingdom
- Bruce Hood
- Department of Psychology, University of Bristol
- Sotaro Kita
- Department of Psychology, University of Warwick