1
Planer RJ. Memetics and the Parallel Architecture. Top Cogn Sci 2024. PMID: 38728582. DOI: 10.1111/tops.12735.
Abstract
The evolution of human communication and culture is among the most significant, and most challenging, questions we face in attempting to understand the evolution of our species. This article takes up two frameworks for theorizing about human communication and culture, namely, Jackendoff's Parallel Architecture of the human language faculty, and the cultural evolutionary framework of Memetics. The aim is to show that the two frameworks uniquely complement one another in some theoretically important ways. In particular, the Parallel Architecture's account of the lexicon significantly expands the range of linguistic phenomena that are plausibly covered by Memetics (e.g., from words to constructions and pure rules of syntax). At the same time, taking a "meme's-eye-view" of the lexicon retools the Parallel Architecture's treatment of the origins and subsequent cultural evolution of language.
Affiliation(s)
- Ronald J Planer
- School of Liberal Arts, University of Wollongong
- Words, Bones, Genes, and Tools: DFG Center for Advanced Studies, University of Tübingen
2
Scott-Phillips T, Heintz C. Great ape interaction: Ladyginian but not Gricean. Proc Natl Acad Sci U S A 2023; 120:e2300243120. PMID: 37824522. PMCID: PMC10589610. DOI: 10.1073/pnas.2300243120.
Abstract
Nonhuman great apes inform one another in ways that can seem very humanlike. Especially in the gestural domain, their behavior exhibits many similarities with human communication, meeting widely used empirical criteria for intentionality. At the same time, there remain some manifest differences, most obviously the enormous range and scope of human expression. How to account for these similarities and differences in a unified way remains a major challenge. Here, we make a key distinction between the expression of intentions (Ladyginian) and the expression of specifically informative intentions (Gricean), and we situate this distinction within a "special case of" framework for classifying different modes of attention manipulation. We hence describe how the attested tendencies of great ape interaction (for instance, to be dyadic rather than triadic, to be about the here-and-now rather than "displaced," and to have a high degree of perceptual resemblance between form and meaning) are products of its Ladyginian but not Gricean character. We also reinterpret video footage of great ape gesture as Ladyginian but not Gricean, and we distinguish several varieties of meaning that are continuous with one another. We conclude that the evolutionary origins of linguistic meaning lie not in gradual changes in communication systems, but rather in gradual changes in social cognition, and specifically in what modes of attention manipulation are enabled by a species' cognitive phenotype: first Ladyginian and in turn Gricean. The second of these shifts rendered humans, and only humans, "language ready."
Affiliation(s)
- Thom Scott-Phillips
- Institute for Logic, Cognition, Language and Information, 20018 Donostia-San Sebastian, Spain
- Christophe Heintz
- Department of Cognitive Science, Central European University, A-1100 Vienna, Austria
3
Emmorey K. Ten things you should know about sign languages. Curr Dir Psychol Sci 2023; 32:387-394. PMID: 37829330. PMCID: PMC10568932. DOI: 10.1177/09637214231173071.
Abstract
The ten things you should know about sign languages are the following. 1) Sign languages have phonology and poetry. 2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production). 3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing. 4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing. 5) Deaf and hard-of-hearing children are at risk for language deprivation. 6) Signers gesture when signing. 7) Sign language experience enhances some visual-spatial skills. 8) The same left hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language. 9) Bimodal bilinguals can code-blend rather than code-switch, which alters the nature of language control. 10) The emergence of new sign languages reveals patterns of language creation and evolution. These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
4
Berent I, Gervain J. Speakers aren't blank slates (with respect to sign-language phonology)! Cognition 2023; 232:105347. PMID: 36528980. DOI: 10.1016/j.cognition.2022.105347.
Abstract
A large literature has gauged the linguistic knowledge of signers by comparing sign-processing by signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge, and as such, they present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
Affiliation(s)
- Judit Gervain
- INCC, CNRS & Université Paris Cité, Paris, France; DPSS, University of Padua, Italy
5
Bohn M, Schmidt LS, Schulze C, Frank MC, Tessler MH. Modeling Individual Differences in Children's Information Integration During Pragmatic Word Learning. Open Mind (Camb) 2022; 6:311-326. PMID: 36993141. PMCID: PMC10042310. DOI: 10.1162/opmi_a_00069.
Abstract
Pragmatics is foundational to language use and learning. Computational cognitive models have been successfully used to predict pragmatic phenomena in adults and children at the aggregate level, but it is unclear whether they can be used to predict behavior at the individual level. We address this question in children (N = 60, 3- to 5-year-olds), taking advantage of recent work on pragmatic cue integration. In Part 1, we use data from four independent tasks to estimate child-specific sensitivity parameters for three information sources: semantic knowledge, expectations about speaker informativeness, and sensitivity to common ground. In Part 2, we use these parameters to generate participant-specific trial-by-trial predictions for a new task that jointly manipulated all three information sources. The model accurately predicted children's behavior in the majority of trials. This work advances a substantive theory of individual differences in which the primary locus of developmental variation is sensitivity to individual information sources.
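The general idea of child-specific cue integration described in this abstract can be illustrated with a minimal sketch. This is not the authors' published model or code; the function, cue labels, and all numeric values below are hypothetical, chosen only to show how per-child sensitivity parameters might reweight several probabilistic information sources before they are combined into a trial-level prediction.

```python
# Hypothetical sketch of cue integration with child-specific sensitivities.
# Each information source assigns a probability to each candidate referent;
# a per-child sensitivity parameter (an exponent, 0 = ignore the cue) scales
# how strongly that child weights each source before the cues are multiplied
# and renormalized into a choice distribution.

def integrate_cues(cue_probs, sensitivities):
    """cue_probs: dict cue_name -> list of per-referent probabilities.
    sensitivities: dict cue_name -> child-specific weight.
    Returns a normalized probability over referents."""
    n_referents = len(next(iter(cue_probs.values())))
    scores = [1.0] * n_referents
    for cue, probs in cue_probs.items():
        s = sensitivities[cue]
        for i, p in enumerate(probs):
            scores[i] *= p ** s  # weight the cue by this child's sensitivity
    total = sum(scores)
    return [x / total for x in scores]

# Two candidate referents; three sources labeled after those in the abstract.
cues = {
    "semantic_knowledge": [0.8, 0.2],
    "speaker_informativeness": [0.6, 0.4],
    "common_ground": [0.5, 0.5],
}
child = {"semantic_knowledge": 1.5, "speaker_informativeness": 0.5, "common_ground": 1.0}
prediction = integrate_cues(cues, child)
```

On this toy setup, a child highly sensitive to semantic knowledge is predicted to choose referent 0 with high probability, even though the other cues are weakly informative.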
Affiliation(s)
- Manuel Bohn
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Louisa S. Schmidt
- Leipzig Research Center for Early Child Development, Leipzig University, Leipzig, Germany
- Cornelia Schulze
- Leipzig Research Center for Early Child Development, Leipzig University, Leipzig, Germany
- Department of Educational Psychology, Faculty of Education, Leipzig University, Leipzig, Germany
- Michael Henry Tessler
- DeepMind, London, UK
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA
6
Evidence for compositionality in baboons (Papio papio) through the test case of negation. Sci Rep 2022; 12:19181. PMID: 36357450. PMCID: PMC9649700. DOI: 10.1038/s41598-022-21143-1.
Abstract
Can non-human animals combine abstract representations much like humans do with language? In particular, can they entertain a compositional representation such as 'not blue'? Across two experiments, we demonstrate that baboons (Papio papio) show a capacity for compositionality. Experiment 1 showed that baboons can entertain negative, compositional, representations: they can learn to associate a cue with iconically related referents (e.g., a blue patch referring to all blue objects), but also to the complement set associated with it (e.g., a blue patch referring to all non-blue objects). Strikingly, Experiment 2 showed that baboons not only learn to associate a cue with iconically related referents, but can learn to associate complex cues (composed of the same cue and an additional visual element) with the complement object set. Thus, they can learn an operation, instantiated by this additional visual element, that can be compositionally combined with previously learned cues. These results significantly weaken any claim that would make the manipulation and combination of abstract representations a solely human privilege.
7
Bohn M, Liebal K, Oña L, Tessler MH. Great ape communication as contextual social inference: a computational modelling perspective. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210096. PMID: 35876204. PMCID: PMC9310183. DOI: 10.1098/rstb.2021.0096.
Abstract
Human communication has been described as a contextual social inference process. Research into great ape communication has been inspired by this view to look for the evolutionary roots of the social, cognitive and interactional processes involved in human communication. This approach has been highly productive, yet it is partly compromised by the widespread focus on how great apes use and understand individual signals. This paper introduces a computational model that formalizes great ape communication as a multi-faceted social inference process that integrates (a) information contained in the signals that make up an utterance, (b) the relationship between communicative partners and (c) the social context. This model makes accurate qualitative and quantitative predictions about real-world communicative interactions between semi-wild-living chimpanzees. When enriched with a pragmatic reasoning process, the model explains repeatedly reported differences between humans and great apes in the interpretation of ambiguous signals (e.g. pointing or iconic gestures). This approach has direct implications for observational and experimental studies of great ape communication and provides a new tool for theorizing about the evolution of uniquely human communication. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
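The core move this abstract describes, treating signal interpretation as an inference that jointly weighs the utterance, the partner relationship, and the social context, can be sketched as a simple Bayesian product of evidence. This is an illustrative sketch, not the authors' model: the function name, the goal labels, and every probability below are invented for the example, and the three sources are treated as independent purely for simplicity.

```python
# Hypothetical sketch: interpreting an ambiguous signal as Bayesian inference
# over the signaller's goal, combining (a) the signal's meaning, (b) the
# relationship between the partners, and (c) the social context.

def interpret(signal_lik, relationship_prior, context_prior):
    """Each argument maps candidate goal -> probability/likelihood.
    Returns the normalized posterior over goals."""
    goals = signal_lik.keys()
    unnorm = {g: signal_lik[g] * relationship_prior[g] * context_prior[g]
              for g in goals}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

# Illustrative numbers: a gesture compatible with two goals on its own.
signal = {"play": 0.5, "groom": 0.5}        # ambiguous in isolation
relationship = {"play": 0.7, "groom": 0.3}  # the two are frequent play partners
context = {"play": 0.6, "groom": 0.4}       # relaxed, non-feeding context
posterior = interpret(signal, relationship, context)
```

Even though the signal itself is uninformative here, the relationship and context shift the posterior toward "play", which is the qualitative pattern the model framework is designed to capture.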
Affiliation(s)
- Manuel Bohn
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
- Katja Liebal
- Institute of Biology, Leipzig University, 04103 Leipzig, Germany
- Linda Oña
- Naturalistic Social Cognition Group, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Michael Henry Tessler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA
8
How and When to Sign “Hey!” Socialization into Grammar in Z, a 1st Generation Family Sign Language from Mexico. Languages 2022. DOI: 10.3390/languages7020080.
Abstract
“Z” is a young sign language developing in a family whose hearing members speak Tzotzil (Mayan). Three deaf siblings, together with an intervening hearing sister and a hearing niece, formed the original cohort of signing adults. A hearing son of the original signer became the first native signer of a second generation. Z provides evidence for a classic grammaticalization chain linking a sign requesting attention (HEY1) to a pragmatic turn-initiating particle (HEY2), which signals a new utterance or change of topic. Such an emergent grammatical particle linked to the pragmatic exigencies of communication is a primordial example of emergent grammar. The chapter presents the stages in the son’s language socialization and acquisition of HEY1 and HEY2, starting at 11 months, through his subsequent bilingual development in both Z and Tzotzil, jointly deploying other communicative modalities such as gaze and touch. It proposes a series of stages leading, by 4 years of age, to his understanding of the complex sequential structure that using the sign involves. Acquiring pragmatic signs such as HEY in Z demonstrates how the grammar of a language, including an emergent sign language, is built upon the practices of a language community and the basic expected parameters of local social life.
9
Margiotoudi K, Bohn M, Schwob N, Taglialatela J, Pulvermüller F, Epping A, Schweller K, Allritz M. Bo-NO-bouba-kiki: picture-word mapping but no spontaneous sound symbolic speech-shape mapping in a language trained bonobo. Proc Biol Sci 2022; 289:20211717. PMID: 35105236. PMCID: PMC8808101. DOI: 10.1098/rspb.2021.1717.
Abstract
Humans share the ability to intuitively map 'sharp' or 'round' pseudowords, such as 'bouba' versus 'kiki', to abstract edgy versus round shapes, respectively. This effect, known as sound symbolism, appears early in human development. The phylogenetic origin of this phenomenon, however, is unclear: are humans the only species capable of experiencing correspondences between speech sounds and shapes, or could similar effects be observed in other animals? Thus far, evidence from an implicit matching experiment failed to find evidence of this sound symbolic matching in great apes, suggesting its human uniqueness. However, explicit tests of sound symbolism have never been conducted with nonhuman great apes. In the present study, a language-competent bonobo completed a cross-modal matching-to-sample task in which he was asked to match spoken English words to pictures, as well as 'sharp' or 'round' pseudowords to shapes. Sound symbolic trials were interspersed among English words. The bonobo matched English words to pictures with high accuracy, but did not show any evidence of spontaneous sound symbolic matching. Our results suggest that speech exposure/comprehension alone cannot explain sound symbolism. This lends plausibility to the hypothesis that biological differences between human and nonhuman primates could account for the putative human specificity of this effect.
Affiliation(s)
- Konstantina Margiotoudi
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität Berlin, Berlin, Germany
- Laboratory of Cognitive Psychology, CNRS and Aix-Marseille University, Marseille, France
- Manuel Bohn
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Natalie Schwob
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA
- Jared Taglialatela
- Ape Cognition and Conservation Initiative, Des Moines, IA, USA
- Department of Ecology, Evolution and Organismal Biology, Kennesaw State University, Kennesaw, GA, USA
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
- Cluster of Excellence ‘Matters of Activity’, Humboldt-Universität zu Berlin, Berlin, Germany
- Amanda Epping
- Ape Cognition and Conservation Initiative, Des Moines, IA, USA
- Ken Schweller
- Ape Cognition and Conservation Initiative, Des Moines, IA, USA
- Matthias Allritz
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
10
Abstract
Human expression is open-ended, versatile, and diverse, ranging from ordinary language use to painting, from exaggerated displays of affection to micro-movements that aid coordination. Here we present and defend the claim that this expressive diversity is united by an interrelated suite of cognitive capacities, the evolved functions of which are the expression and recognition of informative intentions. We describe how evolutionary dynamics normally leash communication to narrow domains of statistical mutual benefit, and how expression is unleashed in humans. The relevant cognitive capacities are cognitive adaptations to living in a partner choice social ecology; and they are, correspondingly, part of the ordinarily developing human cognitive phenotype, emerging early and reliably in ontogeny. In other words, we identify distinctive features of our species' social ecology to explain how and why humans, and only humans, evolved the cognitive capacities that, in turn, lead to massive diversity and open-endedness in means and modes of expression. Language use is but one of these modes of expression, albeit one of manifestly high importance. We make cross-species comparisons, describe how the relevant cognitive capacities can evolve in a gradual manner, and survey how unleashed expression facilitates not only language use, but also novel behaviour in many other domains, focusing on the examples of joint action, teaching, punishment, and art, all of which are ubiquitous in human societies but relatively rare in other species. Much of this diversity derives from graded aspects of human expression, which can be used to satisfy informative intentions in creative and new ways. We aim to help reorient cognitive pragmatics as a phenomenon that is not a supplement to linguistic communication, on the periphery of language science, but rather the foundation of many of the most distinctive features of human behaviour, society, and culture.
11
Flaherty M, Hunsicker D, Goldin-Meadow S. Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign. Cognition 2021; 211:104608. PMID: 33581667. DOI: 10.1016/j.cognition.2021.104608.
Abstract
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, thus suggesting that children bring to language-learning biases to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large numbers of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign. The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.
Affiliation(s)
- Molly Flaherty
- Davidson College, Psychology Department, Davidson, NC 28036, USA
- Dea Hunsicker
- The University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, USA
- Susan Goldin-Meadow
- The University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, USA
12
Abstract
Natural sign languages of deaf communities are acquired on the same time scale as that of spoken languages if children have access to fluent signers providing input from birth. Infants are sensitive to linguistic information provided visually, and early milestones show many parallels. The modality may affect various areas of language acquisition; such effects include the form of signs (sign phonology), the potential advantage presented by visual iconicity, and the use of spatial locations to represent referents, locations, and movement events. Unfortunately, the vast majority of deaf children do not receive accessible linguistic input in infancy, and these children experience language deprivation. Negative effects on language are observed when first-language acquisition is delayed. For those who eventually begin to learn a sign language, earlier input is associated with better language and academic outcomes. Further research is especially needed with a broader diversity of participants.
Affiliation(s)
- Diane Lillo-Martin
- Department of Linguistics, University of Connecticut, Storrs, Connecticut 06269-1145, USA
- Haskins Laboratories, New Haven, Connecticut 06511, USA
- Jonathan Henner
- Department of Specialized Education Services, University of North Carolina, Greensboro, North Carolina 27412, USA
13
Goldin-Meadow S. Discovering the Biases Children Bring to Language Learning. Child Dev Perspect 2020. DOI: 10.1111/cdep.12379.
14
Young children spontaneously recreate core properties of language in a new modality. Proc Natl Acad Sci U S A 2019; 116:26072-26077. PMID: 31792169. DOI: 10.1073/pnas.1904871116.
Abstract
How the world's 6,000+ natural languages have arisen is mostly unknown. Yet, new sign languages have emerged recently among deaf people brought together in a community, offering insights into the dynamics of language evolution. However, documenting the emergence of these languages has mostly consisted of studying the end product; the process by which ad hoc signs are transformed into a structured communication system has not been directly observed. Here we show how young children create new communication systems that exhibit core features of natural languages in less than 30 min. In a controlled setting, we blocked the possibility of using spoken language. In order to communicate novel messages, including abstract concepts, dyads of children spontaneously created novel gestural signs. Over usage, these signs became increasingly arbitrary and conventionalized. When confronted with the need to communicate more complex meanings, children began to grammatically structure their gestures. Together with previous work, these results suggest that children have the basic skills necessary, not only to acquire a natural language, but also to spontaneously create a new one. The speed with which children create these structured systems has profound implications for theorizing about language evolution, a process which is generally thought to span many generations, if not millennia.
15
Bohn M, Call J, Tomasello M. Natural reference: A phylo- and ontogenetic perspective on the comprehension of iconic gestures and vocalizations. Dev Sci 2018; 22:e12757. PMID: 30267557. DOI: 10.1111/desc.12757.
Abstract
The recognition of iconic correspondence between signal and referent has been argued to bootstrap the acquisition and emergence of language. Here, we study the ontogeny, and to some extent the phylogeny, of the ability to spontaneously relate iconic signals, gestures, and/or vocalizations, to previous experience. Children at 18, 24, and 36 months of age (N = 216) and great apes (N = 13) interacted with two apparatuses, each comprising a distinct action and sound. Subsequently, an experimenter mimicked either the action, the sound, or both in combination to refer to one of the apparatuses. Experiments 1 and 2 found no spontaneous comprehension in great apes and in 18-month-old children. At 24 months of age, children were successful with a composite vocalization-gesture signal but not with either vocalization or gesture alone. At 36 months, children succeeded both with a composite vocalization-gesture signal and with gesture alone, but not with vocalization alone. In general, gestures were understood better compared to vocalizations. Experiment 4 showed that gestures were understood irrespective of how children learned about the corresponding action (through observation or self-experience). This pattern of results demonstrates that iconic signals can be a powerful way to establish reference in the absence of language, but they are not trivial for children to comprehend and not all iconic signals are created equal.
Affiliation(s)
- Manuel Bohn
- Department of Psychology, Stanford University, Stanford, California
- Leipzig Research Center for Early Child Development, Leipzig University, Leipzig, Germany
- Josep Call
- School of Psychology and Neuroscience, University of St. Andrews, St. Andrews, UK
- Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Michael Tomasello
- Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina