1
McGarry ME, Midgley KJ, Holcomb PJ, Emmorey K. An ERP investigation of perceptual vs motoric iconicity in sign production. Neuropsychologia 2024; 203:108966. PMID: 39098388; PMCID: PMC11462866; DOI: 10.1016/j.neuropsychologia.2024.108966.
Abstract
The type of form-meaning mapping for iconic signs can vary. For perceptually-iconic signs there is a correspondence between visual features of a referent (e.g., the beak of a bird) and the form of the sign (e.g., extended thumb and index finger at the mouth for the American Sign Language (ASL) sign BIRD). For motorically-iconic signs there is a correspondence between how an object is held/manipulated and the form of the sign (e.g., the ASL sign FLUTE depicts how a flute is played). Previous studies have found that iconic signs are retrieved faster in picture-naming tasks, but type of iconicity has not been manipulated. We conducted an ERP study in which deaf signers and a control group of English speakers named pictures that targeted perceptually-iconic, motorically-iconic, or non-iconic ASL signs. For signers (unlike the control group), naming latencies varied by iconicity type: perceptually-iconic < motorically-iconic < non-iconic signs. A reduction in the N400 amplitude was only found for the perceptually-iconic signs, compared to both non-iconic and motorically-iconic signs. No modulations of N400 amplitudes were observed for the control group. We suggest that this pattern of results arises because pictures eliciting perceptually-iconic signs can more effectively prime lexical access due to greater alignment between features of the picture and the semantic and phonological features of the sign. We speculate that naming latencies are facilitated for motorically-iconic signs due to later processes (e.g., faster phonological encoding via cascading activation from semantic features). Overall, the results indicate that type of iconicity plays a role in sign production when elicited by picture-naming tasks.
Affiliation(s)
- Meghan E McGarry
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
2
Punselie S, McLean B, Dingemanse M. The Anatomy of Iconicity: Cumulative Structural Analogies Underlie Objective and Subjective Measures of Iconicity. Open Mind (Camb) 2024; 8:1191-1212. PMID: 39439590; PMCID: PMC11495960; DOI: 10.1162/opmi_a_00162.
Abstract
The vocabularies of natural languages harbour many instances of iconicity, where words show a perceived resemblance between aspects of form and meaning. An open challenge in this domain is how to reconcile different operationalizations of iconicity and link them to an empirically grounded theory. Here we combine three ways of looking at iconicity using a set of 239 iconic words from 5 spoken languages (Japanese, Korean, Semai, Siwu and Ewe). Data on guessing accuracy serves as a baseline measure of probable iconicity and provides variation that we seek to explain and predict using structure-mapping theory and iconicity ratings. We systematically trace a range of cross-linguistically attested form-meaning correspondences in the dataset, yielding a word-level measure of cumulative iconicity that we find to be highly predictive of guessing accuracy. In a rating study, we collect iconicity judgments for all words from 78 participants. The ratings are well-predicted by our measure of cumulative iconicity and also correlate strongly with guessing accuracy, showing that rating tasks offer a scalable method to measure iconicity. Triangulating the measures reveals how structure-mapping can help open the black box of experimental measures of iconicity. While none of the methods is perfect, taken together they provide a well-rounded way to approach the meaning and measurement of iconicity in natural language vocabulary.
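The paper's word-level measure lends itself to a compact illustration: score each word for the number of attested form-meaning correspondences it exhibits, then correlate that cumulative score with guessing accuracy. The sketch below is a toy version of this logic; the words, the coded correspondences, and the accuracy values are all invented, and the study's actual coding scheme is far richer.

```python
from math import sqrt

# Hypothetical coding: 1 if a word exhibits a given form-meaning
# correspondence (e.g., reduplication for repeated events, vowel
# quality for size), 0 otherwise. All values are invented.
words = {
    "goro":     {"redup": 0, "vowel_size": 0, "length_dur": 1},
    "gorogoro": {"redup": 1, "vowel_size": 0, "length_dur": 1},
    "pimbilii": {"redup": 0, "vowel_size": 1, "length_dur": 1},
    "saa":      {"redup": 0, "vowel_size": 0, "length_dur": 0},
}
# Invented guessing accuracies (proportion of correct guesses).
accuracy = {"goro": 0.55, "gorogoro": 0.80, "pimbilii": 0.70, "saa": 0.45}

# Cumulative iconicity = how many correspondences a word exhibits.
cumulative = {w: sum(f.values()) for w, f in words.items()}

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

order = sorted(words)
r = pearson([cumulative[w] for w in order], [accuracy[w] for w in order])
print(f"cumulative iconicity vs. guessing accuracy: r = {r:.2f}")
```

With more iconic words scoring higher and guessed more accurately, the toy correlation comes out strongly positive, mirroring the direction of the reported effect.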
Affiliation(s)
- Bonnie McLean
- Department of Linguistics and Philology, Uppsala University
3
Aussems S, Devey Smith L, Kita S. Do 14-17-month-old infants use iconic speech and gesture cues to interpret word meanings? J Acoust Soc Am 2024; 156:638-654. PMID: 39051718; DOI: 10.1121/10.0027916.
Abstract
This experimental study investigated whether infants use iconicity in speech and gesture cues to interpret word meanings. Specifically, we tested infants' sensitivity to size sound symbolism and iconic gesture cues and asked whether combining these cues in a multimodal fashion would enhance infants' sensitivity in a superadditive manner. Thirty-six 14-17-month-old infants participated in a preferential looking task in which they heard a spoken nonword (e.g., "zudzud") while observing a small and a large object (e.g., a small and a large square). All infants were presented with an iconic cue for object size (small or large) (1) in the pitch of the spoken nonword (high vs. low), (2) in gesture (small or large), or (3) congruently in pitch and gesture (e.g., a high pitch and a small gesture indicating a small square). Infants did not show a preference for congruently sized objects in any iconic cue condition. Bayes factor analyses showed moderate to strong support for the null hypotheses. In conclusion, 14-17-month-old infants did not use iconic pitch cues, iconic gesture cues, or iconic multimodal cues (pitch and gesture) to associate speech sounds with their referents. These findings challenge theories that emphasize the role of iconicity in early language development.
Affiliation(s)
- Suzanne Aussems
- Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
- Lottie Devey Smith
- School of Education, University of Exeter, Exeter EX1 2LU, United Kingdom
- Sotaro Kita
- Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
4
Motamedi Y, Murgiano M, Grzyb B, Gu Y, Kewenig V, Brieke R, Donnellan E, Marshall C, Wonnacott E, Perniss P, Vigliocco G. Language development beyond the here-and-now: Iconicity and displacement in child-directed communication. Child Dev 2024. PMID: 38563146; DOI: 10.1111/cdev.14099.
Abstract
Most language use is displaced, referring to past, future, or hypothetical events, posing the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues that imagistically evoke properties of absent referents support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects talked about were either familiar or unfamiliar to the child, and either physically present or displaced. The analysis of the range of vocal, manual, and looking behaviors caregivers produced suggests that caregivers used iconic cues especially in displaced contexts and for unfamiliar objects, using other cues when objects were present.
Affiliation(s)
- Yasamin Motamedi
- Department of Experimental Psychology, University College London, London, UK
- Margherita Murgiano
- Department of Experimental Psychology, University College London, London, UK
- Beata Grzyb
- Department of Experimental Psychology, University College London, London, UK
- Yan Gu
- Department of Experimental Psychology, University College London, London, UK
- Department of Psychology, University of Essex, Colchester, UK
- Viktor Kewenig
- Department of Experimental Psychology, University College London, London, UK
- Ricarda Brieke
- Department of Experimental Psychology, University College London, London, UK
- Ed Donnellan
- Department of Experimental Psychology, University College London, London, UK
- Chloe Marshall
- Institute of Education, University College London, London, UK
- Elizabeth Wonnacott
- Department of Language and Cognition, University College London, London, UK
- Department of Education, University of Oxford, Oxford, UK
- Gabriella Vigliocco
- Department of Experimental Psychology, University College London, London, UK
5
Winter B, Lupyan G, Perry LK, Dingemanse M, Perlman M. Iconicity ratings for 14,000+ English words. Behav Res Methods 2024; 56:1640-1655. PMID: 37081237; DOI: 10.3758/s13428-023-02112-6.
Abstract
Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
Affiliation(s)
- Bodo Winter
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Lynn K Perry
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mark Dingemanse
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Marcus Perlman
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
6
Bradley C, Wilbur R. Visual Form and Event Semantics Predict Transitivity in Silent Gestures: Evidence for Compositionality. Cogn Sci 2023; 47:e13331. PMID: 37635624; DOI: 10.1111/cogs.13331.
Abstract
Silent gesture is not considered to be linguistic, on par with spoken and sign languages. It is claimed that silent gestures, unlike language, represent events holistically, without compositional structure. However, recent research has demonstrated that gesturers use consistent strategies when representing objects and events, and that there are behavioral and clinically relevant limits on what form a gesture may take to effect a particular meaning. This systematicity challenges a holistic interpretation of silent gesture, which predicts that there should be no stable form-meaning correspondence across event representations. Here, we demonstrate to the contrary that untrained gesturers systematically manipulate the form of their gestures when representing events with and without a theme (e.g., Someone popped the balloon vs. Someone walked), that is, transitive and intransitive events. We elicited silent gestures and annotated them for manual features active in coding transitivity distinctions in sign languages. We trained linear support vector machines to make item-by-item transitivity predictions based on these features. Prediction accuracy was good across the entire dataset, thus demonstrating that systematicity in silent gesture can be explained with recourse to subunits. We argue that handshape features are constructs co-opted from cognitive systems subserving manual action production and comprehension for communicative purposes, which may integrate into the linguistic system of emerging sign languages. We further suggest that nonsigners tend to map event participants to each hand, a strategy found across genetically and geographically distinct sign languages, suggesting the strategy's cognitive foundation.
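The classification step can be sketched with a small, self-contained stand-in: a linear SVM trained by subgradient descent on the hinge loss, producing item-by-item (leave-one-out) transitivity predictions from binary manual features. The feature names, the items, and the trainer itself are invented for illustration; the study's annotated features and SVM implementation differ.

```python
# Invented binary features of the kind used to code transitivity:
# [handling_handshape, two_hands_asymmetric, path_movement].
# Labels: 1 = transitive event, -1 = intransitive event.
items = [
    ([1, 1, 1],  1), ([1, 0, 1],  1), ([1, 1, 0],  1), ([1, 1, 1],  1),
    ([0, 0, 1], -1), ([0, 1, 0], -1), ([0, 0, 0], -1), ([0, 0, 1], -1),
]

def train_linear_svm(data, lam=0.01, epochs=100, lr=0.1):
    """Linear SVM: subgradient descent on lam*||w||^2 + hinge loss."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            for i in range(dim):
                grad = 2 * lam * w[i] - (y * x[i] if margin < 1 else 0.0)
                w[i] -= lr * grad
            if margin < 1:
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Item-by-item predictions: hold out each item, train on the rest.
correct = 0
for i, (x, y) in enumerate(items):
    w, b = train_linear_svm(items[:i] + items[i + 1:])
    correct += predict(w, b, x) == y
print(f"leave-one-out accuracy: {correct}/{len(items)}")
```

Because the toy data are linearly separable by the handling-handshape feature, held-out accuracy is high, which is the shape of the result the paper reports for its real feature set.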
Affiliation(s)
- Ronnie Wilbur
- Department of Linguistics, Purdue University
- Department of Speech, Language, and Hearing Sciences, Purdue University
7
Dockendorff M, Schmitz L, Vesper C, Knoblich G. Understanding others' distal goals from proximal communicative actions. PLoS One 2023; 18:e0280265. PMID: 36662700; PMCID: PMC9858010; DOI: 10.1371/journal.pone.0280265.
Abstract
Many social interactions require individuals to coordinate their actions and to inform each other about their goals. Often these goals concern an immediate (i.e., proximal) action, as when people give each other a brief handshake, but they sometimes also refer to a future (i.e., distal) action, as when football players perform a passing sequence. The present study investigates whether observers can derive information about such distal goals by relying on kinematic modulations of an actor's instrumental actions. In Experiment 1, participants were presented with animations of a box being moved at different velocities towards an apparent endpoint. The distal goal, however, was for the object to be moved past this endpoint, to one of two occluded target locations. Participants then selected the location which they considered the likely distal goal of the action. As predicted, participants were able to detect differences in movement velocity and, based on these differences, systematically mapped the movements to the two distal goal locations. Adding a distal goal led to more variation in the way participants mapped the observed movements onto different target locations. The results of Experiments 2 and 3 indicated that this cannot be explained by difficulties in perceptual discrimination. Rather, the increased variability likely reflects differences in interpreting the underlying connection between proximal communicative actions and distal goals. The present findings extend previous research on sensorimotor communication by demonstrating that communicative action modulations are not restricted to predicting proximal goals but can also be used to infer more distal goals.
Affiliation(s)
- Martin Dockendorff
- Department of Cognitive Science, Central European University, Vienna, Austria
- Laura Schmitz
- Department of Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Cordula Vesper
- Department of Linguistics, Cognitive Science, and Semiotics, Aarhus University, Aarhus, Denmark
- Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Günther Knoblich
- Department of Cognitive Science, Central European University, Vienna, Austria
8
Fuks O. Infants' Use of Iconicity in the Early Periods of Sign/Spoken Word-Learning. J Deaf Stud Deaf Educ 2022; 28:21-31. PMID: 36221905; DOI: 10.1093/deafed/enac035.
Abstract
The aim of this research was to analyze the use of iconicity during language acquisition of Israeli Sign Language and spoken Hebrew. Two bilingual-bimodal infants were observed in a longitudinal study between the ages of 10 and 26 months. I analyzed the infants' production of iconic words, signs, and gestures. The results showed that infants' use of vocal iconicity reached its peak between the ages of 16 and 20 months. The proportion of imagic iconic signs in the infants' lexicon was also high during that period. In contrast, the infants' use of iconic gestures gradually increased during the study period, as did their co-production with lexical items. The results suggest that infants' use of lexical and gestural iconicity scaffolds the learning of novel labels and fills the gap in their expressive repertoire. It was concluded that teachers/therapists should use iconicity and encourage their students to use it in pedagogical settings.
9
Pyers JE, Emmorey K. The iconic motivation for the morphophonological distinction between noun-verb pairs in American Sign Language does not reflect common human construals of objects and actions. Lang Cogn 2022; 14:622-644. PMID: 36426211; PMCID: PMC9681175; DOI: 10.1017/langcog.2022.20.
Abstract
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Affiliation(s)
- Jennie E. Pyers
- Wellesley College, Psychology Department, Wellesley, MA, USA
- Karen Emmorey
- San Diego State University, School of Speech, Language and Hearing Sciences, San Diego, CA, USA
10
Motamedi Y, Wolters L, Schouwstra M, Kirby S. The Effects of Iconicity and Conventionalization on Word Order Preferences. Cogn Sci 2022; 46:e13203. PMID: 36251421; PMCID: PMC9787421; DOI: 10.1111/cogs.13203.
Abstract
Of the six possible orderings of the three main constituents of language (subject, verb, and object), two, SOV and SVO, are predominant cross-linguistically. Previous research using the silent gesture paradigm, in which hearing participants produce or respond to gestures without speech, has shown that different factors such as reversibility, salience, and animacy can affect the preferences for different orders. Here, we test whether participants' preferences for orders that are conditioned on the semantics of the event change depending on (i) the iconicity of individual gestural elements and (ii) the prior knowledge of a conventional lexicon. Our findings demonstrate the same preference for semantically conditioned word order found in previous studies, specifically that SOV and SVO are preferred differentially for different types of events. We do not find that iconicity of individual gestures affects participants' ordering preferences; however, we do find that learning a lexicon leads to a stronger preference for SVO-like orders overall. Finally, we compare our findings from English speakers, using an SVO-dominant language, with data from speakers of an SOV-dominant language, Turkish. We find that, while learning a lexicon leads to an increase in SVO preference for both sets of participants, this effect is mediated by language background and event type, suggesting that an interplay of factors together determines preferences for different ordering patterns. Taken together, our results support a view of word order as a gradient phenomenon responding to multiple biases.
Affiliation(s)
- Lucie Wolters
- Centre for Language Evolution, The University of Edinburgh
- Simon Kirby
- Centre for Language Evolution, The University of Edinburgh
11
Pleyer M, Lepic R, Hartmann S. Compositionality in Different Modalities: A View from Usage-Based Linguistics. Int J Primatol 2022. DOI: 10.1007/s10764-022-00330-x.
Abstract
The field of linguistics concerns itself with understanding the human capacity for language. Compositionality is a key notion in this research tradition. Compositionality refers to the notion that the meaning of a complex linguistic unit is a function of the meanings of its constituent parts. However, the question as to whether compositionality is a defining feature of human language is a matter of debate: usage-based and constructionist approaches emphasize the pervasive role of idiomaticity in language, and argue that strict compositionality is the exception rather than the rule. We review the major discussion points on compositionality from a usage-based point of view, taking both spoken and signed languages into account. In addition, we discuss theories that aim at accounting for the emergence of compositional language through processes of cultural transmission as well as the debate of whether animal communication systems exhibit compositionality. We argue for a view that emphasizes the analyzability of complex linguistic units, providing a template for accounting for the multimodal nature of human language.
12
Gappmayr P, Lieberman AM, Pyers J, Caselli NK. Do parents modify child-directed signing to emphasize iconicity? Front Psychol 2022; 13:920729. PMID: 36092032; PMCID: PMC9453873; DOI: 10.3389/fpsyg.2022.920729.
Abstract
Iconic signs are overrepresented in the vocabularies of young deaf children, but it is unclear why. It is possible that iconic signs are easier for children to learn, but it is also possible that adults use iconic signs in child-directed signing in ways that make them more learnable, either by using them more often than less iconic signs or by lengthening them. We analyzed videos of naturalistic play sessions between parents and deaf children (n = 24 dyads) aged 9-60 months. To determine whether iconic signs are overrepresented during child-directed signing, we compared the iconicity of actual parent productions to the iconicity of simulated vocabularies designed to estimate chance levels of iconicity. For almost all dyads, parent sign types and tokens were not more iconic than the simulated vocabularies, suggesting that parents do not select more iconic signs during child-directed signing. To determine whether iconic signs are more likely to be lengthened, we ran a linear regression predicting sign duration, and found an interaction between age and iconicity: while parents of younger children produced non-iconic and iconic signs with similar durations, parents of older children produced non-iconic signs with shorter durations than iconic signs. Thus, parents sign more quickly with older children than younger children, and iconic signs appear to resist that reduction in sign length. It is possible that iconic signs are perceptually available longer, and their availability is a candidate hypothesis as to why iconic signs are overrepresented in children's vocabularies.
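The chance-level comparison can be sketched as a simple Monte Carlo test: draw many simulated vocabularies of the same size from a lexicon's iconicity ratings and ask where the parent's actual mean iconicity falls in that null distribution. Everything below (the toy lexicon, its ratings, and the parent's sign types) is invented; the study's lexical database and sampling procedure differ.

```python
import random

random.seed(1)

# Invented iconicity ratings (1-7 scale) for a toy sign lexicon.
lexicon = {f"sign{i:03d}": random.uniform(1, 7) for i in range(500)}

# Invented set of sign types produced by one parent in a play session.
parent_signs = random.sample(sorted(lexicon), 40)
parent_mean = sum(lexicon[s] for s in parent_signs) / len(parent_signs)

# Null distribution: mean iconicity of same-size simulated vocabularies
# sampled at random from the lexicon.
n_sims = 2000
sim_means = []
for _ in range(n_sims):
    sim = random.sample(sorted(lexicon), len(parent_signs))
    sim_means.append(sum(lexicon[s] for s in sim) / len(sim))

# One-sided Monte Carlo p-value: how often a chance vocabulary is at
# least as iconic as the parent's actual one.
p = sum(m >= parent_mean for m in sim_means) / n_sims
print(f"parent mean iconicity = {parent_mean:.2f}, p = {p:.3f}")
```

Because the toy parent vocabulary is itself a random draw, the p-value is unremarkable, which matches the paper's finding for almost all dyads: parents' sign types were no more iconic than the simulated vocabularies.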
Affiliation(s)
- Paris Gappmayr
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Amy M. Lieberman
- Wheelock College of Education and Human Development, Boston University, Boston, MA, United States
- Jennie Pyers
- Department of Psychology, Wellesley College, Wellesley, MA, United States
- Naomi K. Caselli
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
13
Miozzo M, Peressotti F. How the hand has shaped sign languages. Sci Rep 2022; 12:11980. PMID: 35831441; PMCID: PMC9279340; DOI: 10.1038/s41598-022-15699-1.
Abstract
In natural languages, biological constraints push toward cross-linguistic homogeneity while linguistic, cultural, and historical processes promote language diversification. Here, we investigated the effects of these opposing forces on the finger and thumb configurations (handshapes) used in natural sign languages. We analyzed over 38,000 handshapes from 33 languages. In all languages, the handshape exhibited the same form of adaptation to biological constraints found in tasks for which the hand has naturally evolved (e.g., grasping). These results were not replicated in fingerspelling (another task in which the handshape is used), revealing a signing-specific adaptation. We also showed that the handshape varies cross-linguistically under the effects of linguistic, cultural, and historical processes. Their effects could thus emerge even without departing from the demands of biological constraints. The handshape's cross-linguistic variability consists of changes in the frequencies with which the handshapes most faithful to biological constraints appear in individual sign languages.
Affiliation(s)
- Michele Miozzo
- Psychology Department, Columbia University, 1190 Amsterdam Av., New York, NY, 10027, USA
- Francesca Peressotti
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione, University of Padua, Padua, Italy
- Neuroscience Center, University of Padua, Padua, Italy
14
Emerging Lexicon for Objects in Central Taurus Sign Language. Languages 2022. DOI: 10.3390/languages7020118.
Abstract
This paper investigates object-based and action-based iconic strategies and combinations of them to refer to everyday objects in the lexicon of an emerging village sign language, namely Central Taurus Sign Language (CTSL) of Turkey. CTSL naturally emerged in the absence of an accessible language model within the last half century. It provides a vantage point for how languages emerge, because it is relatively young and its very first creators are still alive today. Participants from two successive age cohorts were tested in two studies: (1) CTSL signers viewed 26 everyday objects in isolation and labeled them to an addressee in a picture-naming task, and (2) CTSL signers viewed 16 everyday objects in isolation and labeled them to an addressee before they viewed the same objects in context being acted upon by a human agent in short video clips and described the event in the clips to a communicative partner. The overall results show that the CTSL signers equally favored object-based and action-based iconic strategies with no significant difference across cohorts in the implementation of iconic strategies in both studies. However, there were significant differences in the implementation of iconic strategies in response to objects presented in isolation vs. context. Additionally, the CTSL-2 signers produced significantly longer sign strings than the CTSL-1 signers when objects were presented in isolation and significantly more combinatorial sign strings than the CTSL-1 signers. When objects were presented in context, both cohorts produced significantly shorter sign strings and more single-sign strings in the overall responses. The CTSL-2 signers still produced significantly more combinatorial sign strings in context. The two studies together portray the type and combination of iconic strategies in isolation vs. context in the emerging lexicon of a language system in its initial stages.
15
The effects of multiple linguistic variables on picture naming in American Sign Language. Behav Res Methods 2021; 54:2502-2521. PMID: 34918219; DOI: 10.3758/s13428-021-01751-x.
Abstract
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
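The direction of the reported predictor effects can be illustrated with a self-contained ordinary-least-squares sketch on synthetic naming data. The coefficient sizes, noise level, and 1000 ms baseline below are invented; only the signs of the effects (frequency and iconicity speed naming, phonological complexity slows it) follow the abstract.

```python
import random

random.seed(0)

# Synthetic items mirroring the reported predictors of ASL naming RTs.
n = 200
X, y = [], []
for _ in range(n):
    freq = random.gauss(0, 1)   # standardized lexical frequency
    icon = random.gauss(0, 1)   # standardized iconicity rating
    phon = random.gauss(0, 1)   # standardized phonological complexity
    rt = 1000 - 60 * freq - 30 * icon + 40 * phon + random.gauss(0, 50)
    X.append([1.0, freq, icon, phon])  # leading 1.0 = intercept term
    y.append(rt)

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        tail = sum(xtx[r][c] * beta[c] for c in range(r + 1, k))
        beta[r] = (xty[r] - tail) / xtx[r][r]
    return beta

intercept, b_freq, b_icon, b_phon = ols(X, y)
print(f"frequency: {b_freq:.1f}, iconicity: {b_icon:.1f}, "
      f"complexity: {b_phon:.1f} (ms per SD)")
```

The recovered coefficients land near the generating values, with frequency and iconicity negative (faster RTs) and complexity positive (slower RTs), the qualitative pattern the study reports.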
16
Fitch A, Arunachalam S, Lieberman AM. Mapping Word to World in ASL: Evidence from a Human Simulation Paradigm. Cogn Sci 2021; 45:e13061. [PMID: 34861057 PMCID: PMC9365062 DOI: 10.1111/cogs.13061] [Received: 12/18/2020] [Revised: 09/27/2021] [Accepted: 10/05/2021] [Indexed: 11/30/2022]
Abstract
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent-child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.
Affiliation(s)
- Allison Fitch
- Deaf Education and Deaf Studies, Boston University; Psychology, Rochester Institute of Technology
17
McGarry ME, Massa N, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Matching pictures and signs: An ERP study of the effects of iconic structural alignment in American Sign Language. Neuropsychologia 2021; 162:108051. [PMID: 34624260 DOI: 10.1016/j.neuropsychologia.2021.108051] [Received: 12/18/2020] [Revised: 07/28/2021] [Accepted: 10/02/2021] [Indexed: 10/20/2022]
Abstract
Event-related potentials (ERPs) were used to explore the effects of iconicity and structural visual alignment between a picture-prime and a sign-target in a picture-sign matching task in American Sign Language (ASL). Half the targets were iconic signs and were presented after a) a matching visually-aligned picture (e.g., the shape and location of the hands in the sign COW align with the depiction of a cow with visible horns), b) a matching visually-nonaligned picture (e.g., the cow's horns were not clearly shown), and c) a non-matching picture (e.g., a picture of a swing instead of a cow). The other half of the targets were filler signs. Trials in the matching condition were responded to faster than those in the non-matching condition and were associated with smaller N400 amplitudes in deaf ASL signers. These effects were also observed for hearing non-signers performing the same task with spoken-English targets. Trials where the picture-prime was aligned with the sign target were responded to faster than non-aligned trials and were associated with a reduced P3 amplitude rather than a reduced N400, suggesting that picture-sign alignment facilitated the decision process, rather than lexical access. These ERP and behavioral effects of alignment were found only for the ASL signers. The results indicate that iconicity effects on sign comprehension may reflect a task-dependent strategic use of iconicity, rather than facilitation of lexical access.
Affiliation(s)
- Meghan E McGarry
- Joint Doctoral Program in Language and Communication Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Natasja Massa
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Megan Mott
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA.
18
Trettenbrein PC, Pendzich NK, Cramer JM, Steinbach M, Zaccarella E. Psycholinguistic norms for more than 300 lexical signs in German Sign Language (DGS). Behav Res Methods 2021; 53:1817-1832. [PMID: 33575986 PMCID: PMC8516755 DOI: 10.3758/s13428-020-01524-y] [Accepted: 12/11/2020] [Indexed: 02/06/2023]
Abstract
Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as, for example, age of acquisition have been found to play an important role, but such information is not as yet available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign's correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips from automated motion-tracking data. All norms, stimulus clips, data, as well as code used for analysis are made available through the Open Science Framework in the hope that they may prove to be useful to other researchers: https://doi.org/10.17605/OSF.IO/MZ8J4.
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany.
- International Max Planck Research School on Neuroscience of Communication: Structure, Function, & Plasticity (IMPRS NeuroCom), Stephanstraße 1a, Leipzig, 04103, Germany.
- Nina-Kristin Pendzich
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Jens-Michael Cramer
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Markus Steinbach
- SignLab, Department of German Philology, Georg-August-University, Käte-Hamburger-Weg 3, Göttingen, 37073, Germany
- Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive & Brain Sciences, Stephanstraße 1a, Leipzig, 04103, Germany
19
Murgiano M, Motamedi Y, Vigliocco G. Situating Language in the Real-World: The Role of Multimodal Iconicity and Indexicality. J Cogn 2021; 4:38. [PMID: 34514309 PMCID: PMC8396123 DOI: 10.5334/joc.113] [Received: 03/30/2020] [Accepted: 07/06/2020] [Indexed: 11/30/2022]
Abstract
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in the speech (for spoken languages) or manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gestures, eye gaze etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use and we discuss their function. We then move to argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.
20
Akita K. Phonation Types Matter in Sound Symbolism. Cogn Sci 2021; 45:e12982. [PMID: 34018216 PMCID: PMC8244085 DOI: 10.1111/cogs.12982] [Received: 09/18/2020] [Revised: 04/01/2021] [Accepted: 04/03/2021] [Indexed: 11/28/2022]
Abstract
Sound symbolism is a non-arbitrary correspondence between sound and meaning. The majority of studies on sound symbolism have focused on consonants and vowels, and the sound-symbolic properties of suprasegmentals, particularly phonation types, have been largely neglected. This study examines the size and shape symbolism of four phonation types: modal and creaky voices, falsetto, and whisper. Japanese speakers heard 12 novel words (e.g., /íbi/, /ápa/) pronounced with the four types of phonation and rated the size and roundedness/pointedness each of the 48 stimuli seemed to represent on seven-point scales. The results showed that phonation types as well as consonantal and vocalic features influenced the ratings. Creaky voice was associated with larger and more pointed images than modal voice, which was in turn associated with larger and more pointed images than whisper. Falsetto was also associated with roundedness but not with smallness. These results shed new light on the acoustic approaches to sound symbolism and suggest the significance of phonation types and other suprasegmental features in the phenomenon.
Affiliation(s)
- Kimi Akita
- Department of English Linguistics, Graduate School of Humanities, Nagoya University
21
Sehyr ZS, Caselli N, Cohen-Goldberg AM, Emmorey K. The ASL-LEX 2.0 Project: A Database of Lexical and Phonological Properties for 2,723 Signs in American Sign Language. J Deaf Stud Deaf Educ 2021; 26:263-277. [PMID: 33598676 PMCID: PMC7977685 DOI: 10.1093/deafed/enaa038] [Received: 07/27/2020] [Revised: 09/25/2020] [Accepted: 09/27/2020] [Indexed: 06/12/2023]
Abstract
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency ("guessability") ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0 and describe the distributional characteristics for sign properties across the lexicon and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs and iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
22
Rombouts E, Maes B, Zink I. An investigation into the relationship between quality of pantomime gestures and visuospatial skills. Augment Altern Commun 2020; 36:179-189. [PMID: 33043713 DOI: 10.1080/07434618.2020.1811760] [Indexed: 10/23/2022]
Abstract
While children with developmental language disorder or Williams syndrome appear to use hand gestures to compensate for specific cognitive and communicative difficulties, they have different cognitive strength-weakness profiles. Their semantic and visuospatial skills potentially affect gesture quality such as iconicity. The present study focuses on untangling the unique contribution of these skills in the quality of gestures. An explicit gesture elicitation task was presented to 25 participants with developmental language disorder between 7 and 10 years of age, 25 age-matched peers with typical development, and 14 participants with Williams Syndrome (8-23 years). They gestured pictures of objects without using speech (pantomime). The iconicity, semantic richness, and representation technique of the pantomimes were coded. Participants' semantic association and visuospatial skills were formally assessed. Iconicity was slightly lower in individuals with Williams syndrome, which seems related to their visuospatial deficit. While semantic saliency was similar across participant groups, small differences in representation technique were found. Partial correlations showed that visuospatial skills and semantic skills were instrumental in producing clear pantomimes. These findings indicate that clinicians aiming to enhance individuals' natural iconic gestures should consider achieved iconicity, particularly in individuals with low visuospatial skills.
Affiliation(s)
- Ellen Rombouts
- Department of Neurosciences, Experimental Otorinolaryngology, KU Leuven, Belgium
- Bea Maes
- Parenting and Special Education Research Group, KU Leuven, Belgium
- Inge Zink
- Department of Neurosciences, Experimental Otorinolaryngology, KU Leuven, Belgium
23
Schaller F, Lee B, Sehyr ZS, Farnady LO, Emmorey K. Cross-linguistic metaphor priming in ASL-English bilinguals: Effects of the Double Mapping Constraint. Sign Lang Linguist 2020; 23:96-111. [PMID: 33994844 PMCID: PMC8115326 DOI: 10.1075/sll.00045.sch] [Indexed: 06/12/2023]
Abstract
Meir's (2010) Double Mapping Constraint (DMC) states that the use of iconic signs in metaphors is restricted to signs that preserve the structural correspondence between the articulators and the concrete source domain and between the concrete and metaphorical domains. We investigated ASL signers' comprehension of English metaphors whose translations complied with the DMC ("Communication collapsed during the meeting") or violated the DMC ("The acid ate the metal"). Metaphors were preceded by the ASL translation of the English verb, an unrelated sign, or a still video. Participants made sensibility judgments. Response times (RTs) were faster for DMC-compliant sentences with verb primes compared to unrelated primes or the still baseline. RTs for DMC-violation sentences were longer when preceded by verb primes. We propose that the structured iconicity of the ASL verbs primed the semantic features involved in the iconic mapping, and that these primed semantic features facilitated comprehension of DMC-compliant metaphors and slowed comprehension of DMC-violation metaphors.
Affiliation(s)
- Franziska Schaller
- Experimental Neurolinguistics Group, Bielefeld University, Universitaetsstrasse 25, 33615 Bielefeld, Germany
- Cluster of Excellence “Cognitive Interaction Technology”, Bielefeld University, Inspiration 1, 33619 Bielefeld
- Brittany Lee
- San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, San Diego, CA 92129
- Zed Sevcikova Sehyr
- San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, San Diego, CA 92129
- Lucinda O’Grady Farnady
- San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, San Diego, CA 92129
- Karen Emmorey
- San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, San Diego, CA 92129
24
McGarry ME, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Picture-naming in American Sign Language: an electrophysiological study of the effects of iconicity and structured alignment. Lang Cogn Neurosci 2020; 36:199-210. [PMID: 33732747 PMCID: PMC7959108 DOI: 10.1080/23273798.2020.1804601] [Received: 01/22/2020] [Accepted: 07/25/2020] [Indexed: 06/12/2023]
Abstract
A picture-naming task and ERPs were used to investigate effects of iconicity and visual alignment between signs and pictures in American Sign Language (ASL). For iconic signs, half the pictures visually overlapped with phonological features of the sign (e.g., the fingers of CAT align with a picture of a cat with prominent whiskers), while half did not (whiskers are not shown). Iconic signs were produced numerically faster than non-iconic signs and were associated with larger N400 amplitudes, akin to concreteness effects. Pictures aligned with iconic signs were named faster than non-aligned pictures, and there was a reduction in N400 amplitude. No behavioral effects were observed for the control group (English speakers). We conclude that sensory-motoric semantic features are represented more robustly for iconic than non-iconic signs (eliciting a concreteness-like N400 effect) and visual overlap between pictures and the phonological form of iconic signs facilitates lexical retrieval (eliciting a reduced N400).
Affiliation(s)
- Meghan E. McGarry
- Joint Doctoral Program in Language and Communication Disorders, San Diego State University and University of California, San Diego, San Diego, CA USA
- Megan Mott
- Department of Psychology, San Diego State University, San Diego, CA USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA USA
25
Abstract
Seeing an object is a natural source for learning about the object's configuration. We show that language can also shape our knowledge about visual objects. We investigated sign language that enables deaf individuals to communicate through hand movements with as much expressive power as any other natural language. A few signs represent objects in a specific orientation. Sign-language users (signers) recognized visual objects faster when oriented as in the sign, and this match in orientation elicited specific brain responses in signers, as measured by event-related potentials (ERPs). Further analyses suggested that signers' responsiveness to object orientation derived from changes in the visual object representations induced by the signs. Our results also show that language facilitates discrimination between objects of the same kind (e.g., different cars), an effect never reported before with spoken languages. By focusing on sign language we could better characterize the impact of language (a uniquely human ability) on object visual processing.
26
Slonimska A, Özyürek A, Capirci O. The role of iconicity and simultaneity for efficient communication: The case of Italian Sign Language (LIS). Cognition 2020; 200:104246. [PMID: 32197151 DOI: 10.1016/j.cognition.2020.104246] [Received: 04/23/2019] [Revised: 02/09/2020] [Accepted: 02/22/2020] [Indexed: 11/17/2022]
Abstract
A fundamental assumption about language is that, regardless of language modality, it faces the linearization problem, i.e., an event that occurs simultaneously in the world has to be split in language to be organized on a temporal scale. However, the visual modality of signed languages allows its users not only to express meaning in a linear manner but also to use iconicity and multiple articulators together to encode information simultaneously. Accordingly, in cases when it is necessary to encode informatively rich events, signers can take advantage of simultaneous encoding in order to represent information about different referents and their actions simultaneously. This in turn would lead to more iconic and direct representation. Up to now, there has been no experimental study focusing on simultaneous encoding of information in signed languages and its possible advantage for efficient communication. In the present study, we assessed how many information units can be encoded simultaneously in Italian Sign Language (LIS) and whether the amount of simultaneously encoded information varies based on the amount of information that is required to be expressed. Twenty-three deaf adults participated in a director-matcher game in which they described 30 images of events that varied in amount of information they contained. Results revealed that as the information that had to be encoded increased, signers also increased use of multiple articulators to encode different information (i.e., kinematic simultaneity) and density of simultaneously encoded information in their production. Present findings show how the fundamental properties of signed languages, i.e., iconicity and simultaneity, are used for the purpose of efficient information encoding in Italian Sign Language (LIS).
Affiliation(s)
- Anita Slonimska
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Via S. Martino della Battaglia, 44, 00185 Rome, RM, Italy; Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, the Netherlands.
- Asli Özyürek
- Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands.
- Olga Capirci
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Via S. Martino della Battaglia, 44, 00185 Rome, RM, Italy.
27
Ortega G, Özyürek A. Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behav Res Methods 2020; 52:51-67. [PMID: 30788798 PMCID: PMC7005091 DOI: 10.3758/s13428-019-01204-6] [Indexed: 11/08/2022]
Abstract
An unprecedented number of empirical studies have shown that iconic gestures (those that mimic the sensorimotor attributes of a referent) contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture-meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). This database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture's mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
Affiliation(s)
- Gerardo Ortega
- English Language and Applied Linguistics, University of Birmingham, Birmingham, UK.
- Aslı Özyürek
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Donders Institute for Brain Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
28
Caselli NK, Pyers JE. Degree and not type of iconicity affects sign language vocabulary acquisition. J Exp Psychol Learn Mem Cogn 2020; 46:127-139. [PMID: 31094562 PMCID: PMC6858483 DOI: 10.1037/xlm0000713] [Indexed: 11/08/2022]
Abstract
Lexical iconicity (signs or words that resemble their meaning) is overrepresented in children's early vocabularies. Embodied theories of language acquisition predict that symbols are more learnable when they are grounded in a child's firsthand experiences. As such, pantomimic iconic signs, which use the signer's body to represent a body, might be more readily learned than other types of iconic signs. Alternatively, the structure mapping theory of iconicity predicts that learners are sensitive to the amount of overlap between form and meaning. In this exploratory study of early vocabulary development in American Sign Language (ASL), we asked whether type of iconicity predicts sign acquisition above and beyond degree of iconicity. We also controlled for concreteness and relevance to babies, two possible confounding factors. Highly concrete referents and concepts that are germane to babies may be amenable to iconic mappings. We reanalyzed a previously published set of ASL Communicative Development Inventory (CDI) reports from 58 deaf children learning ASL from their deaf parents (Anderson & Reilly, 2002). Pantomimic signs were more iconic than other types of iconic signs (perceptual, both pantomimic and perceptual, or arbitrary), but type of iconicity had no effect on acquisition. Children may not make use of the special status of pantomimic elements of signs. Their vocabularies are, however, shaped by degree of iconicity, which aligns with a structure mapping theory of iconicity, though other explanations are also compatible (e.g., iconicity in child-directed signing). Previously demonstrated effects of type of iconicity may be an artifact of the increased degree of iconicity among pantomimic signs.
29
Pyers J, Senghas A. Lexical iconicity is differentially favored under transmission in a new sign language: The effect of type of iconicity. Sign Lang Linguist 2020; 23:73-95. [PMID: 33613090 PMCID: PMC7894619 DOI: 10.1075/sll.00044.pye] [Indexed: 05/10/2023]
Abstract
Observations that iconicity diminishes over time in sign languages pose a puzzle: why should something so evidently useful and functional decrease? Using an archival dataset of signs elicited over 15 years from 4 first-cohort and 4 third-cohort signers of an emerging sign language (Nicaraguan Sign Language), we investigated changes in pantomimic (body-to-body) and perceptual (body-to-object) iconicity. We make three key observations: (1) there is greater variability in the signs produced by the first cohort compared to the third; (2) while both types of iconicity are evident, pantomimic iconicity is more prevalent than perceptual iconicity for both groups; and (3) across cohorts, pantomimic elements are dropped to a greater proportion than perceptual elements. The higher rate of pantomimic iconicity in the first-cohort lexicon reflects the usefulness of body-as-body mapping in language creation. Yet, its greater vulnerability to change over transmission suggests that it is less favored by children's language acquisition processes.
30
Rudner M, Orfanidou E, Kästner L, Cardin V, Woll B, Capek CM, Rönnberg J. Neural Networks Supporting Phoneme Monitoring Are Modulated by Phonology but Not Lexicality or Iconicity: Evidence From British and Swedish Sign Language. Front Hum Neurosci 2019; 13:374. [PMID: 31695602 PMCID: PMC6817460 DOI: 10.3389/fnhum.2019.00374] [Received: 12/27/2018] [Accepted: 10/03/2019] [Indexed: 11/18/2022]
Abstract
Sign languages are natural languages in the visual domain. Because they lack a written form, they provide a sharper tool than spoken languages for investigating lexicality effects which may be confounded by orthographic processing. In a previous study, we showed that the neural networks supporting phoneme monitoring in deaf British Sign Language (BSL) users are modulated by phonology but not lexicality or iconicity. In the present study, we investigated whether this pattern generalizes to deaf Swedish Sign Language (SSL) users. BSL and SSL have largely overlapping phoneme inventories but are mutually unintelligible because their lexical overlap is small. This is important because it means that even when signs lexicalized in BSL are unintelligible to users of SSL, they are usually still phonologically acceptable. During fMRI scanning, deaf users of the two different sign languages monitored signs that were lexicalized in either one or both of those languages for phonologically contrastive elements. Neural activation patterns relating to different linguistic levels of processing were similar across the two sign languages; in particular, we found no effect of lexicality, supporting the notion that apparent lexicality effects on sublexical processing of speech may be driven by orthographic strategies. As expected, we found an effect of phonology but not iconicity. Further, there was a difference in neural activation between the two groups in a motion-processing region of the left occipital cortex, possibly driven by cultural differences, such as education. Importantly, this difference was not modulated by the linguistic characteristics of the material, underscoring the robustness of the neural activation patterns relating to different linguistic levels of processing.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Eleni Orfanidou
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of Crete, Rethymno, Greece
- Lena Kästner
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; Department of Philosophy, Saarland University, Saarbrücken, Germany
- Velia Cardin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of East Anglia, Norwich, United Kingdom
- Bencie Woll
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom
- Cheryl M Capek
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester, United Kingdom
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden

31
Sehyr ZS, Emmorey K. The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: evidence from iconicity and transparency judgments. Language and Cognition 2019; 11:208-234. [PMID: 31798755 PMCID: PMC6886719 DOI: 10.1017/langcog.2019.18]
Abstract
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity (H index) of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers and non-signers' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
32
Abstract
Sound symbolism refers to an association between phonemes and stimuli containing particular perceptual and/or semantic elements (e.g., objects of a certain size or shape). Some of the best-known examples include the mil/mal effect (Sapir, Journal of Experimental Psychology, 12, 225-239, 1929) and the maluma/takete effect (Köhler, 1929). Interest in this topic has been on the rise within psychology, and studies have demonstrated that sound symbolic effects are relevant for many facets of cognition, including language, action, memory, and categorization. Sound symbolism also provides a mechanism by which words' forms can have nonarbitrary, iconic relationships with their meanings. Although various proposals have been put forth for how phonetic features (both acoustic and articulatory) come to be associated with stimuli, there is as yet no generally agreed-upon explanation. We review five proposals: statistical co-occurrence between phonetic features and associated stimuli in the environment; a shared property among phonetic features and stimuli; neural factors; species-general, evolved associations; and patterns extracted from language. We identify a number of outstanding questions that need to be addressed on this topic and suggest next steps for the field.
33
Evidence for a functional specialization of ventral anterior temporal lobe for language. Neuroimage 2018; 183:800-810. [DOI: 10.1016/j.neuroimage.2018.08.062]
34
Novack MA, Filippi CA, Goldin-Meadow S, Woodward AL. Actions speak louder than gestures when you are 2 years old. Dev Psychol 2018; 54:1809-1821. [PMID: 30234335 PMCID: PMC6152821 DOI: 10.1037/dev0000553]
Abstract
Interpreting iconic gestures can be challenging for children. Here, we explore the features and functions of iconic gestures that make them more challenging for young children to interpret than instrumental actions. In Study 1, we show that 2.5-year-olds are able to glean size information from handshape in a simple gesture, although their performance is significantly worse than 4-year-olds'. Studies 2 to 4 explore the boundary conditions of 2.5-year-olds' gesture understanding. In Study 2, 2.5-year-old children have an easier time interpreting size information in hands that reach than in hands that gesture. In Study 3, we tease apart the perceptual features and functional objectives of reaches and gestures. We created a context in which an action has the perceptual features of a reach (extending the hand toward an object) but serves the function of a gesture (the object is behind a barrier and not obtainable; the hand thus functions to represent, rather than reach for, the object). In this context, children struggle to interpret size information in the hand, suggesting that gesture's representational function (rather than its perceptual features) is what makes it hard for young children to interpret. A distance control (Study 4) in which a person holds a box in gesture space (close to the body) demonstrates that children's difficulty interpreting static gesture cannot be attributed to the physical distance between a gesture and its referent. Together, these studies provide evidence that children's struggle to interpret iconic gesture may stem from its status as representational action.
Affiliation(s)
- Miriam A. Novack
- The University of Chicago, Chicago, IL
- Northwestern University, Evanston, IL
- Courtney A. Filippi
- The University of Chicago, Chicago, IL
- National Institutes of Health, Bethesda, MD

35
Abstract
Metaphor abounds in both sign and spoken languages. However, in sign languages, languages in the visual-manual modality, metaphors work a bit differently than they do in spoken languages. In this paper we explore some of the ways in which metaphors in sign languages differ from metaphors in spoken languages. We address three differences: (a) Some metaphors are very common in spoken languages yet are infelicitous in sign languages; (b) Body-part terms are possible in very specific types of metaphors in sign languages, but are not so restricted in spoken languages; (c) Similes in some sign languages are dispreferred in predicative positions in which metaphors are fine, in contrast to spoken languages where both can appear in these environments. We argue that these differences can be explained by two seemingly unrelated principles: the Double Mapping Constraint (Meir, 2010), which accounts for the interaction between metaphor and iconicity in languages, and Croft's (2003) constraint regarding the autonomy and dependency of elements in metaphorical constructions. We further argue that the study of metaphor in the signed modality offers novel insights concerning the nature of metaphor in general, and the role of figurative speech in language.
Affiliation(s)
- Irit Meir
- Department of Hebrew Language, University of Haifa, Haifa, Israel
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Ariel Cohen
- Department of Foreign Literatures and Linguistics, Ben-Gurion University of the Negev, Beersheba, Israel

36
Sulik J. Cognitive mechanisms for inferring the meaning of novel signals during symbolisation. PLoS One 2018; 13:e0189540. [PMID: 29337998 PMCID: PMC5770015 DOI: 10.1371/journal.pone.0189540]
Abstract
As participants repeatedly interact using graphical signals (as in a game of Pictionary), the signals gradually shift from being iconic (or motivated) to being symbolic (or arbitrary). The aim here is to test experimentally whether this change in the form of the signal implies a concomitant shift in the inferential mechanisms needed to understand it. The results show that, during early, iconic stages, there is more reliance on creative inferential processes associated with insight problem solving, and that the recruitment of these cognitive mechanisms decreases over time. The variation in inferential mechanism is not predicted by the sign’s visual complexity or iconicity, but by its familiarity, and by the complexity of the relevant mental representations. The discussion explores implications for pragmatics, language evolution, and iconicity research.
Affiliation(s)
- Justin Sulik
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Department of Psychology, Royal Holloway, University of London, London, United Kingdom

37
Ortega G. Iconicity and Sign Lexical Acquisition: A Review. Front Psychol 2017; 8:1280. [PMID: 28824480 PMCID: PMC5539242 DOI: 10.3389/fpsyg.2017.01280]
Abstract
The study of iconicity, defined as the direct relationship between a linguistic form and its referent, has gained momentum in recent years across a wide range of disciplines. In the spoken modality, there is abundant evidence showing that iconicity is a key factor that facilitates language acquisition. However, when we look at sign languages, which excel in the prevalence of iconic structures, there is a more mixed picture, with some studies showing a positive effect and others showing a null or negative effect. In an attempt to reconcile the existing evidence, the present review presents a critical overview of the literature on the acquisition of a sign language as first (L1) and second (L2) language and points at some factors that may be the source of disagreement. Regarding sign L1 acquisition, the contradicting findings may relate to iconicity being defined in a very broad sense when a more fine-grained operationalisation might reveal an effect in sign learning. Regarding sign L2 acquisition, evidence shows that there is a clear dissociation in the effect of iconicity in that it facilitates conceptual-semantic aspects of sign learning but hinders the acquisition of the exact phonological form of signs. It will be argued that when we consider the gradient nature of iconicity and that signs consist of a phonological form attached to a meaning, we can discern how iconicity impacts sign learning in positive and negative ways.
Affiliation(s)
- Gerardo Ortega
- Centre for Language Studies, Radboud University, Nijmegen, Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands

38
Abstract
In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when they attended to an image, and they demonstrated the function of an object more often when they attended to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task's goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.
39
Abstract
Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.
40
Caselli NK, Pyers JE. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language. Psychol Sci 2017; 28:979-987. [PMID: 28557672 DOI: 10.1177/0956797617700498]
Abstract
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
41
Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures. Cogn Res Princ Implic 2017. [PMID: 28180179 DOI: 10.1186/s41235-016-0024-5]
Abstract
How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants' gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.
42
Cooperrider K, Gentner D, Goldin-Meadow S. Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures. Cogn Res Princ Implic 2016; 1:28. [PMID: 28180179 PMCID: PMC5256459 DOI: 10.1186/s41235-016-0024-5]
Abstract
How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants’ gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.
Affiliation(s)
- Kensy Cooperrider
- Department of Psychology, University of Chicago, 5848 S. University Avenue, Chicago, IL 60637 USA
- Dedre Gentner
- Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208 USA
- Susan Goldin-Meadow
- Department of Psychology, University of Chicago, 5848 S. University Avenue, Chicago, IL 60637 USA

43
Dingemanse M, Blasi DE, Lupyan G, Christiansen MH, Monaghan P. Arbitrariness, Iconicity, and Systematicity in Language. Trends Cogn Sci 2015; 19:603-615. [PMID: 26412098 DOI: 10.1016/j.tics.2015.07.013]
Abstract
The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.
Affiliation(s)
- Mark Dingemanse
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Damián E Blasi
- Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany; Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA
- Morten H Christiansen
- Cornell University, Ithaca, NY, USA; University of Southern Denmark, Odense, Denmark

44
Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proc Natl Acad Sci U S A 2015; 112:5968-73. [PMID: 25918419 DOI: 10.1073/pnas.1423080112]
Abstract
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.
45
Vigliocco G, Perniss P, Vinson D. Language as a multimodal phenomenon: implications for language learning, processing and evolution. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130292. [PMID: 25092660 DOI: 10.1098/rstb.2013.0292]
Abstract
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language consists wholly of an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception, in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.
Affiliation(s)
- Gabriella Vigliocco
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London WC1H 0PD
- Pamela Perniss
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London WC1H 0PD
- David Vinson
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP

46
Perniss P, Vigliocco G. The bridge of iconicity: from a world of experience to the experience of language. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130300. [PMID: 25092668 PMCID: PMC4123679 DOI: 10.1098/rstb.2013.0300]
Abstract
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
Affiliation(s)
- Pamela Perniss
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London, WC1H 0AP, UK; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London, WC1H 0PD, UK
- Gabriella Vigliocco
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London, WC1H 0AP, UK; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London, WC1H 0PD, UK