1
Özçalışkan Ş, Lucero C, Goldin-Meadow S. Is vision necessary for the timely acquisition of language-specific patterns in co-speech gesture and their lack in silent gesture? Dev Sci 2024; 27:e13507. PMID: 38629500. DOI: 10.1111/desc.13507.
Abstract
Blind adults display language specificity in how they package and order events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into three age groups: 5-6, 7-8, and 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house), first with speech and then without speech, using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), in both blind and sighted learners. The language-specific co-speech gesture pattern, for both packaging and ordering semantic elements, was present at the earliest ages we tested in blind and sighted children. The silent gesture pattern appeared later in blind children than in sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process at early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture. RESEARCH HIGHLIGHTS: Gestures produced with speech (i.e., co-speech gesture) follow language-specific patterns in event representation in both blind and sighted children. Gestures produced without speech (i.e., silent gesture) do not follow language-specific patterns in event representation in either blind or sighted children. Language-specific patterns in speech and co-speech gesture become observable at the same time in blind and sighted children. The cross-linguistic similarities in silent gesture begin slightly later in blind children than in sighted children.
Affiliation(s)
- Şeyda Özçalışkan
- Department of Psychology, Georgia State University, Atlanta, Georgia, USA
- Ché Lucero
- Department of Psychology, Cornell University, Ithaca, New York, USA
- Department of Psychology, SPARK Neuro, Inc., New York, New York, USA
2
Ter Bekke M, Levinson SC, van Otterdijk L, Kühn M, Holler J. Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition 2024; 248:105806. PMID: 38749291. DOI: 10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight, but a closer look quickly reveals the cognitive challenges involved, many of which result from the fast-paced nature of conversation. One core ingredient of turn coordination is the anticipation of upcoming turn ends, which allows listeners to ready their next contribution. Across two experiments, we investigated how two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, contribute to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when they could see the speaker and their visual bodily signals than when they could not, especially for longer turns. Likewise, participants anticipated turn ends better when they had access to the preceding discourse context than when they did not, again especially for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes, and upper body (i.e., torso and arms). Participants anticipated turn ends better when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
Affiliation(s)
- Marlijn Ter Bekke
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Lina van Otterdijk
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Michelle Kühn
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
3
Hickok G, Venezia J, Teghipco A. Beyond Broca: neural architecture and evolution of a dual motor speech coordination system. Brain 2023; 146:1775-1790. PMID: 36746488. PMCID: PMC10411947. DOI: 10.1093/brain/awac454.
Abstract
Classical neural architecture models of speech production propose a single system, centred on Broca's area, that coordinates all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca's area is involved in motor speech coordination and the idea that there is only one coordination network. Drawing on a wide range of evidence, here we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song is coordinated by a hierarchically organized dorsolateral system, while supralaryngeal articulation at the phonetic/syllabic level is coordinated by a more ventral system posterior to Broca's area. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.
Affiliation(s)
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, CA 92697, USA
- Department of Language Science, University of California, Irvine, CA 92697, USA
- Jonathan Venezia
- Auditory Research Laboratory, VA Loma Linda Healthcare System, Loma Linda, CA 92357, USA
- Department of Otolaryngology—Head and Neck Surgery, Loma Linda University School of Medicine, Loma Linda, CA 92350, USA
- Alex Teghipco
- Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
4
Berent I, Gervain J. Speakers aren't blank slates (with respect to sign-language phonology)! Cognition 2023; 232:105347. PMID: 36528980. DOI: 10.1016/j.cognition.2022.105347.
Abstract
A large literature has gauged the linguistic knowledge of signers by comparing sign processing in signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge and, as such, present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
Affiliation(s)
- Judit Gervain
- INCC, CNRS & Université Paris Cité, Paris, France; DPSS, University of Padua, Italy
5
Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210094. PMID: 35876208. PMCID: PMC9310176. DOI: 10.1098/rstb.2021.0094.
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language is thus not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human "interaction engine": comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler
- Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands
- Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
6
Rasenberg M, Özyürek A, Bögels S, Dingemanse M. The Primacy of Multimodal Alignment in Converging on Shared Symbols for Novel Referents. Discourse Processes 2022. DOI: 10.1080/0163853x.2021.1992235.
Affiliation(s)
- Marlou Rasenberg
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Department of Communication and Cognition, Tilburg University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Mark Dingemanse
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
7
Van den Bossche C, Wolf D, Rekittke LM, Mittelberg I, Mathiak K. Judgmental perception of co-speech gestures in MDD. J Affect Disord 2021; 291:46-56. PMID: 34023747. DOI: 10.1016/j.jad.2021.04.085.
Abstract
Cognitive bias in depression may increase sensitivity to the judgmental appraisal of communicative cues. Nonverbal communication, encompassing co-speech gestures, is crucial for social functioning and is perceived differently by men and women; however, little is known about the effect of depression on the perception of appraisal. We investigated whether a cognitive bias influences the perception of appraisal and judgement of nonverbal communication in major depressive disorder (MDD). While watching videos of speakers retelling a story and gesturing, 22 patients with MDD and 22 matched healthy controls pressed a button whenever they perceived the speaker as appraising in a positive or negative way. The speakers were presented in four conditions (with or without speech, and as a natural speaker or as a stick figure) to evaluate context effects. Inter-subject covariance (ISC) of the button-press time series measured the consistency of the response pattern across groups as a function of the factors diagnosis and gender. Significant effects emerged for diagnosis (p = .002), gender (p = .007), and their interaction (p < .001). Female healthy controls perceived the gestures as appraising more consistently than male controls, female patients, and male patients, whereas the latter three groups did not differ. Further, the ISC measure of consistency correlated negatively with depression severity. The natural-speaker video without audio speech yielded the highest response consistency. Co-speech gestures may indeed drive these ISC effects, because the number of gestures, but not of facial shrugs, correlated with ISC amplitude. During co-speech gestures, a cognitive bias thus led to a disturbed perception of appraisal in females with MDD. Social communication is critical for functional outcomes in mental disorders; the perception of gestural communication is therefore important in rehabilitation.
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University
- Irene Mittelberg
- Department of Linguistics and Cognitive Semiotics, RWTH Aachen University
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University; Translational Brain Research, Jülich Aachen Research Alliance
8
Luna S, Joubert S, Blondel M, Cecchetto C, Gagné JP. The Impact of Aging on Spatial Abilities in Deaf Users of a Sign Language. J Deaf Stud Deaf Educ 2021; 26:230-240. PMID: 33221919. DOI: 10.1093/deafed/enaa034.
Abstract
Research involving the general population of people who use a spoken language to communicate has demonstrated that older adults experience cognitive and physical changes associated with aging. Notwithstanding the differences in the cognitive processes involved in sign and spoken languages, it is possible that aging can also affect cognitive processing in deaf signers. This research aims to explore the impact of aging on spatial abilities among sign language users. Results showed that younger signers were more accurate than older signers on all spatial tasks. Therefore, the age-related impact on spatial abilities found in the older hearing population can be generalized to the population of signers. Potential implications for sign language production and comprehension are discussed.
Affiliation(s)
- Stéphanie Luna
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
- Sven Joubert
- Department of Psychology, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
- Marion Blondel
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
- Carlo Cecchetto
- Centre National de Recherche Scientifique, Structures Formelles du Langage, Université Paris 8
- Department of Psychology, University of Milan-Bicocca
- Jean-Pierre Gagné
- Faculty of Medicine, Université de Montréal
- Centre de recherche de l'Institut universitaire de gériatrie de Montréal
9
Ortega G, Özyürek A. Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture. Behav Res Methods 2020; 52:51-67. PMID: 30788798. PMCID: PMC7005091. DOI: 10.3758/s13428-019-01204-6.
Abstract
An unprecedented number of empirical studies have shown that iconic gestures (those that mimic the sensorimotor attributes of a referent) contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture-meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). The database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture's mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
Affiliation(s)
- Gerardo Ortega
- English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
- Aslı Özyürek
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
10
Macuch Silva V, Holler J, Ozyurek A, Roberts SG. Multimodality and the origin of a novel communication system in face-to-face interaction. R Soc Open Sci 2020; 7:182056. PMID: 32218922. PMCID: PMC7029942. DOI: 10.1098/rsos.182056.
Abstract
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment in which pairs of participants performed a referential communication task, describing unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only, and with the option to use both (multimodal communication). The results suggest that, even in the absence of conventional signals, gesture is a more powerful mode of communication than vocalization, but that there are also advantages to multimodality over gesture alone. Participants with the option to produce multimodal signals were as accurate as those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
Affiliation(s)
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Asli Ozyurek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Seán G. Roberts
- Department of Archaeology and Anthropology (excd.lab), University of Bristol, Bristol, UK
11
Abstract
People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words.
Affiliation(s)
- Pierce Edmiston
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
- Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
12
Perlman M, Lupyan G. People Can Create Iconic Vocalizations to Communicate Various Meanings to Naïve Listeners. Sci Rep 2018; 8:2634. PMID: 29422530. PMCID: PMC5805706. DOI: 10.1038/s41598-018-20961-6.
Abstract
The innovation of iconic gestures is essential to establishing the vocabularies of signed languages, but might iconicity also play a role in the origin of spoken words? Can people create novel vocalizations that are comprehensible to naïve listeners without prior convention? We launched a contest in which participants submitted non-linguistic vocalizations for 30 meanings spanning actions, humans, animals, inanimate objects, properties, quantifiers and demonstratives. The winner was determined by the ability of naïve listeners to infer the meanings of the vocalizations. We report a series of experiments and analyses that evaluated the vocalizations for: (1) comprehensibility to naïve listeners; (2) the degree to which they were iconic; (3) agreement between producers and listeners in iconicity; and (4) whether iconicity helps listeners learn the vocalizations as category labels. The results show that contestants were able to create successful iconic vocalizations for most of the meanings; these were largely comprehensible to naïve listeners and easier to learn as category labels. These findings demonstrate how iconic vocalizations can enable interlocutors to establish understanding in the absence of conventions. They suggest that, prior to the advent of full-blown spoken languages, people could have used iconic vocalizations to ground a spoken vocabulary with considerable semantic breadth.
Affiliation(s)
- Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, USA
13
Abstract
Human language, a signature of our species, derives its power from its links to human cognition. For centuries, scholars have been captivated by this link between language and cognition. In this article, we shift this focus. Adopting a developmental lens, we review recent evidence that sheds light on the origin and developmental unfolding of the link between language and cognition in the first year of life. This evidence, which reveals the joint contributions of infants' innate capacities and their sensitivity to experience, highlights how a precocious link between language and cognition advances infants beyond their initial perceptual and conceptual capacities. The evidence also identifies the conceptual advantages this link brings to human infants. By tracing the emergence of a language-cognition link in infancy, this article reveals a dynamic developmental cascade in infants' first year, with each developmental advance providing a foundation for subsequent advances.
Affiliation(s)
- Danielle R Perszyk
- Department of Psychology, Northwestern University, Evanston, Illinois 60208
- Sandra R Waxman
- Department of Psychology, Northwestern University, Evanston, Illinois 60208
- Institute for Policy Research, Northwestern University, Evanston, Illinois 60208
14
Abstract
We suggest that one way to approach the evolution of language is through reverse engineering: asking what components of the language faculty could have been useful in the absence of the full complement of components. We explore the possibilities offered by linear grammar, a form of language that lacks syntax and morphology altogether and that structures its utterances through a direct mapping between semantics and phonology. A language with a linear grammar would have no syntactic categories or syntactic phrases, and therefore no syntactic recursion. It would also have no functional categories such as tense, agreement, and case inflection, and no derivational morphology. Such a language would still be capable of conveying certain semantic relations through word order, for instance by stipulating that agents should precede patients. However, many other semantic relations would have to be based on pragmatics and discourse context. We find evidence of linear grammar in a wide range of linguistic phenomena: pidgins, stages of late second language acquisition, home signs, village sign languages, language comprehension (even in fully syntactic languages), aphasia, and specific language impairment. We also find a full-blown language, Riau Indonesian, whose grammar is arguably close to a pure linear grammar. In addition, when subjects are asked to convey information through nonlinguistic gesture, their gestures make use of semantically based principles of linear ordering. Finally, some pockets of English grammar, notably compounds, can be characterized in terms of linear grammar. We conclude that linear grammar is a plausible evolutionary precursor of modern fully syntactic grammar, one that is still active in the human mind.