1
Smirnova A. Syntactic Variation in Reduced Registers Through the Lens of the Parallel Architecture. Top Cogn Sci 2024. PMID: 38963921. DOI: 10.1111/tops.12747.
Abstract
Deviation from the syntactic norm, as manifested in the absence of otherwise expected lexical and syntactic material, has been extensively studied in theoretical syntax. Such modifications are observed in headlines, telegrams, labels, and other specialized contexts, collectively referred to as "reduced" registers. Focusing on search queries, a type of reduced register, I propose that they are generated by a simpler grammar that lacks a full-fledged syntactic component. The analysis is couched in the Parallel Architecture framework, whose assumption of the relative independence of linguistic components (their parallelism) and whose rejection of syntactocentrism are essential to explaining the properties of queries.
Affiliation(s)
- Anastasia Smirnova
- Department of English Language and Literature, San Francisco State University
2
Planer RJ. Memetics and the Parallel Architecture. Top Cogn Sci 2024. PMID: 38728582. DOI: 10.1111/tops.12735.
Abstract
The evolution of human communication and culture is among the most significant, and most challenging, questions we face in attempting to understand the evolution of our species. This article takes up two frameworks for theorizing about human communication and culture: Jackendoff's Parallel Architecture of the human language faculty, and the cultural evolutionary framework of Memetics. The aim is to show that the two frameworks uniquely complement one another in theoretically important ways. In particular, the Parallel Architecture's account of the lexicon significantly expands the range of linguistic phenomena that Memetics plausibly covers (e.g., from words to constructions and pure rules of syntax). At the same time, taking a "meme's-eye view" of the lexicon retools the Parallel Architecture's treatment of the origins and subsequent cultural evolution of language.
Affiliation(s)
- Ronald J Planer
- School of Liberal Arts, University of Wollongong
- Words, Bones, Genes, and Tools: DFG Center for Advanced Studies, University of Tübingen
3
Mahowald K, Diachek E, Gibson E, Fedorenko E, Futrell R. Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages. Cognition 2023; 241:105543. PMID: 37713956. DOI: 10.1016/j.cognition.2023.105543.
Abstract
Grammatical cues are sometimes redundant with word meanings in natural language. For instance, English word order rules constrain the word order of a sentence like "The dog chewed the bone," even though the status of "dog" as subject and "bone" as object can be inferred from world knowledge and plausibility. Quantifying how often this redundancy occurs, and how the level of redundancy varies across typologically diverse languages, can shed light on the function and evolution of grammar. To that end, we performed a behavioral experiment in English and Russian and a cross-linguistic computational analysis measuring the redundancy of grammatical cues in transitive clauses extracted from corpus text. English and Russian speakers (n = 484) were presented with subjects, verbs, and objects (in random order and with morphological markings removed) extracted from naturally occurring sentences and were asked to identify which noun is the subject of the action. Accuracy was high in both languages (∼89% in English, ∼87% in Russian). Next, we trained a neural network machine classifier on a similar task: predicting which nominal in a subject-verb-object triad is the subject. Across 30 languages from eight language families, performance was consistently high: a median accuracy of 87%, comparable to the accuracy observed in the human experiments. The conclusion is that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meaning that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu/Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog").
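The human task in this entry (guessing the subject of an unordered, morphology-free noun-verb-noun triad) can be made concrete with a toy heuristic. The study trained a neural classifier on corpora from 30 languages; the animacy lexicon and rule below are invented purely for illustration:

```python
# Invented mini-lexicon standing in for world knowledge / plausibility.
ANIMATE = {"dog", "cat", "girl", "ray", "lu"}

def guess_subject(nouns, verb):
    """Guess which of two unordered nouns is the subject (toy heuristic).

    `verb` is accepted for interface realism but unused by this sketch.
    """
    a, b = nouns
    if (a in ANIMATE) and (b not in ANIMATE):
        return a
    if (b in ANIMATE) and (a not in ANIMATE):
        return b
    # Reversible roles ("Ray helped Lu"): grammatical cues are needed.
    return None

print(guess_subject(("bone", "dog"), "chewed"))  # -> dog
```

Note how the tie case mirrors the abstract's point: when both arguments are plausible subjects, word meanings alone cannot resolve subjecthood.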
Affiliation(s)
- Kyle Mahowald
- The University of Texas at Austin, Linguistics, USA
- Edward Gibson
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA
- Evelina Fedorenko
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA; Massachusetts Institute of Technology, McGovern Institute for Brain Research, USA
4
Leroux M, Schel AM, Wilke C, Chandia B, Zuberbühler K, Slocombe KE, Townsend SW. Call combinations and compositional processing in wild chimpanzees. Nat Commun 2023; 14:2225. PMID: 37142584. PMCID: PMC10160036. DOI: 10.1038/s41467-023-37816-y.
Abstract
Through syntax, i.e., the combination of words into larger phrases, language can express a limitless number of messages. Data from great apes, our closest living relatives, are central to reconstructing the phylogenetic origins of syntax, yet such data are currently lacking. Here, we provide evidence for syntactic-like structuring in chimpanzee communication. Chimpanzees produce "alarm-huus" when surprised and "waa-barks" when potentially recruiting conspecifics during aggression or hunting. Anecdotal data suggested that chimpanzees combine these calls specifically when encountering snakes. Using snake presentations, we confirm that call combinations are produced when individuals encounter snakes, and we find that more individuals join the caller after hearing the combination. To test the meaning-bearing nature of the call combination, we use playbacks of artificially constructed call combinations and of both calls in isolation. Chimpanzees react most strongly to the call combinations, showing longer looking responses than to either call alone. We propose that the "alarm-huu + waa-bark" represents a compositional, syntactic-like structure, in which the meaning of the call combination is derived from the meaning of its parts. Our work suggests that compositional structures may not have evolved de novo in the human lineage, but that the cognitive building blocks facilitating syntax may have been present in our last common ancestor with chimpanzees.
Affiliation(s)
- Maël Leroux
- Department of Comparative Language Science, University of Zürich, Zürich, Switzerland
- Budongo Conservation Field Station, Masindi, Uganda
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zürich, Zürich, Switzerland
- Anne M Schel
- Animal Behaviour and Cognition, Utrecht University, Utrecht, Netherlands
- Claudia Wilke
- Department of Comparative Language Science, University of Zürich, Zürich, Switzerland
- Budongo Conservation Field Station, Masindi, Uganda
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zürich, Zürich, Switzerland
- Klaus Zuberbühler
- Budongo Conservation Field Station, Masindi, Uganda
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zürich, Zürich, Switzerland
- Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Scotland, UK
- Simon W Townsend
- Department of Comparative Language Science, University of Zürich, Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zürich, Zürich, Switzerland
- Department of Psychology, University of Warwick, Coventry, UK
5
Bare and Constructional Compositionality. Int J Primatol 2023. DOI: 10.1007/s10764-022-00343-6.
Abstract
This paper proposes a typology of compositionality as manifest in human language and animal communication. At the heart of the typology is a distinction between bare compositionality, in which the meaning of a complex expression is determined solely by the meanings of its constituents, and constructional compositionality, in which the meaning of a complex expression is determined by the meanings of its constituents and also by various aspects of its structure. Bare and constructional compositionality may be observed in human language as well as in various animal communication systems, including primates and birds. Architecturally, bare compositionality provides the foundations for constructional compositionality, while phylogenetically, bare compositionality is a potential starting point for the evolution of constructional compositionality in animal communication and human language.
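The two notions can be contrasted in a few lines of code. The call meanings and the "topic-comment" convention below are invented solely to make the distinction concrete:

```python
# Toy contrast between the two kinds of compositionality defined above.
# Bare: meaning is a function of constituent meanings only (order-free).
# Constructional: structure (here, linear order) also contributes.
meanings = {"alarm": {"danger"}, "recruit": {"come"}}

def bare(parts):
    """Meaning = union of constituent meanings; structure plays no role."""
    out = set()
    for p in parts:
        out |= meanings[p]
    return out

def constructional(parts):
    """Invented convention: first element is topic, the rest are comment."""
    return {"topic": meanings[parts[0]],
            "comment": [meanings[p] for p in parts[1:]]}

print(bare(["alarm", "recruit"]) == bare(["recruit", "alarm"]))  # -> True
print(constructional(["alarm", "recruit"])
      == constructional(["recruit", "alarm"]))                   # -> False
```

Reordering changes nothing under bare compositionality but changes the derived meaning under constructional compositionality, which is the paper's core distinction.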
6
Bacelar Valente M. Do All Languages Share the Same Conceptual Structure? Cognitive Semantics 2022; 8:159-180. DOI: 10.1163/23526416-08020001.
Abstract
In this work, we consider the views of three exponents of major areas of linguistics – Levelt (psycholinguistics), Jackendoff (theoretical linguistics), and Gil (field linguistics) – regarding whether the conceptual structure of languages is universal. In Levelt's view, during language production the conceptual structure of the preverbal message is language-specific. In Jackendoff's theoretical approach to language – his parallel architecture – there is a universal conceptual structure shared by all languages, in contradiction to Levelt's view. In Gil's work on Riau Indonesian, he proposes a conceptual structure that is quite different from that of English, the structure Jackendoff adopts as universal. Finding no reason to disagree with Gil, we take his work as vindicating Levelt's view that, during language production, preverbal messages are encoded with different conceptual structures in different languages.
Affiliation(s)
- Mario Bacelar Valente
- Department of Physical, Chemical and Natural Systems, Pablo de Olavide University, Seville, Spain
7
Kirsch S, Elser C, Barbieri E, Kümmerer D, Weiller C, Musso M. Syntax Acquisition in Healthy Adults and Post-Stroke Individuals: The Intriguing Role of Grammatical Preference, Statistical Learning, and Education. Brain Sci 2022; 12:616. PMID: 35625003. PMCID: PMC9139563. DOI: 10.3390/brainsci12050616.
Abstract
Previous work has provided contrasting evidence on syntax acquisition. Syntax-internal factors, i.e., instinctive knowledge of Universal Grammar (UG) for finite-state grammar (FSG) and phrase-structure grammar (PSG), but also syntax-external factors, such as language competence, working memory (WM), and demographic factors, may affect syntax acquisition. This study employed an artificial grammar paradigm to identify which factors predicted syntax acquisition. Thirty-seven healthy individuals and forty-nine left-hemispheric stroke patients (fourteen with aphasia) read syllable sequences adhering to or violating FSG and PSG. They performed preference classifications followed by grammatical classifications (after training). Results showed the best classification accuracy for sequences adhering to UG, with performance predicted by syntactic competence and spatial WM. Classification of ungrammatical sequences improved after training and was predicted by verbal WM. Although accuracy on FSG was better than on PSG, generalization was fully possible only for PSG. Education was the best predictor of syntax acquisition, while aphasia and lesion volume were not predictors. This study shows a clear preference for UG, which is influenced by spatial and linguistic knowledge but not by the presence of aphasia. Verbal WM supported the identification of rule violations. Moreover, the acquisition of FSG and PSG relied on partially different mechanisms, but both depended on education.
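The two grammar types contrasted in this study can be made concrete with toy generators: a finite-state pattern (AB)^n versus a phrase-structure pattern A^n B^n. The syllable inventories below are invented, not the study's stimuli:

```python
import random

# Invented syllable classes; A and B are disjoint.
A = ["de", "gi", "le"]
B = ["to", "pu", "ka"]

def fsg(n):
    """(AB)^n: strict alternation, recognizable by a finite-state device."""
    return [random.choice(pool) for _ in range(n) for pool in (A, B)]

def psg(n):
    """A^n B^n: counting/center-embedding, requiring phrase structure."""
    return [random.choice(A) for _ in range(n)] + [random.choice(B) for _ in range(n)]

def is_fsg(seq):
    """Check the alternation rule: even positions from A, odd from B."""
    return all(s in (A if i % 2 == 0 else B) for i, s in enumerate(seq))

print(is_fsg(fsg(3)), is_fsg(psg(3)))  # -> True False
```

For n > 1 an A^n B^n string violates the alternation check, which is the sense in which the two rule systems generate distinct sequence sets.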
Affiliation(s)
- Simon Kirsch
- Department of Neurology, University Medical Center Freiburg, Breisacherstrasse 64, 79106 Freiburg, Germany
- Clinic for Psychiatry and Psychotherapy, Hauptstraße 8, 79104 Freiburg, Germany
- Carolin Elser
- Department of Neurology, University Medical Center Freiburg, Breisacherstrasse 64, 79106 Freiburg, Germany
- Elena Barbieri
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208-2952, USA
- Dorothee Kümmerer
- Department of Neurology, University Medical Center Freiburg, Breisacherstrasse 64, 79106 Freiburg, Germany
- Medizinische Akademie, Schule für Logopädie, Schönauer Str. 4, 79115 Freiburg, Germany
- Cornelius Weiller
- Department of Neurology, University Medical Center Freiburg, Breisacherstrasse 64, 79106 Freiburg, Germany
- Mariacristina Musso
- Department of Neurology, University Medical Center Freiburg, Breisacherstrasse 64, 79106 Freiburg, Germany
8
Cohn N, Schilperoord J. Remarks on Multimodality: Grammatical Interactions in the Parallel Architecture. Front Artif Intell 2022; 4:778060. PMID: 35059636. PMCID: PMC8764459. DOI: 10.3389/frai.2021.778060.
Abstract
Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each considered as indivisible components of multimodal messages. Here, we argue that multimodality should not be characterized by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences in time (e.g., speech with gesture) or space (e.g., writing with pictures) of the sensory signals, while multimodal meaning-making balances how modalities carry "semantic weight" for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg University, Tilburg, Netherlands
9
Child-directed speech is optimized for syntax-free semantic inference. Sci Rep 2021; 11:16527. PMID: 34400656. PMCID: PMC8368066. DOI: 10.1038/s41598-021-95392-x.
Abstract
The way infants learn language is a highly complex adaptive behavior. This behavior chiefly relies on the ability to extract information from the speech they hear and combine it with information from the external environment. Most theories assume that this ability critically hinges on the recognition of at least some syntactic structure. Here, we show that child-directed speech allows for semantic inference without relying on explicit structural information. We simulate the process of semantic inference with machine learning applied to large text collections of two different types of speech, child-directed speech versus adult-directed speech. Taking the core meaning of causality as a test case, we find that in child-directed speech causal meaning can be successfully inferred from simple co-occurrences of neighboring words. By contrast, semantic inference in adult-directed speech fundamentally requires additional access to syntactic structure. These results suggest that child-directed speech is ideally shaped for a learner who has not yet mastered syntactic structure.
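The core idea, that meaning can be inferred from simple co-occurrences of neighboring words, can be sketched with a toy pointwise-mutual-information computation. The mini-corpus and window size below are invented; the study used large child- and adult-directed speech collections and machine learning:

```python
import math
from collections import Counter

# Invented mini-corpus standing in for child-directed speech.
corpus = [
    "you pushed the ball so it rolled",
    "the ball rolled because you pushed it",
    "push the ball and it rolls",
]

window = 2  # count pairs of words at most this many positions apart
pair_counts, word_counts = Counter(), Counter()
for sent in corpus:
    words = sent.split()
    word_counts.update(words)
    for i, w in enumerate(words):
        for v in words[i + 1 : i + 1 + window]:
            pair_counts[frozenset((w, v))] += 1

def pmi(w, v):
    """Toy pointwise mutual information between two neighboring words."""
    c = pair_counts[frozenset((w, v))]
    if c == 0:
        return float("-inf")
    total = sum(word_counts.values())
    return math.log2((c / total) / ((word_counts[w] / total) * (word_counts[v] / total)))

print(round(pmi("you", "pushed"), 2))  # -> 3.32
```

High association between an agent word and a causal verb falls out of neighboring co-occurrence alone, with no parse of the sentences involved.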
10
Human language evolution: a view from theoretical linguistics on how syntax and the lexicon first came into being. Primates 2021; 63:403-415. PMID: 33821365. PMCID: PMC9463227. DOI: 10.1007/s10329-021-00891-0.
Abstract
Human language is a multi-componential function comprising several sub-functions, each of which may have evolved in other species independently of language. Among them, two sub-functions, or modules, have been claimed to be truly unique to humans, namely hierarchical syntax (known as "Merge" in linguistics) and the "lexicon." This kind of species-specificity stands as a hindrance to a naturalistic understanding of human language evolution. Here we challenge this issue and advance our hypotheses on how human syntax and the lexicon may have evolved from pre-existing cognitive capacities in our ancestors and other species, including but not limited to nonhuman primates. Specifically, we argue that Merge evolved from motor action planning, and that the human lexicon, with its distinction between lexical and functional categories, evolved from predecessors found in animal cognition through a process we call "disintegration." We build our arguments on recent developments in generative grammar but crucially depart from some of its core ideas by borrowing insights from other relevant disciplines. Most importantly, we maintain that every sub-function of human language keeps evolutionary continuity with other species' cognitive capacities, and we reject a saltational emergence of language in favor of its gradual evolution. By doing so, we aim to offer a firm theoretical background on which a promising scenario of language evolution can be constructed.
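As a point of reference for the first claim, "Merge" as standardly defined in generative grammar is a binary operation that forms an unordered set from two syntactic objects; hierarchy arises from recursive application. A minimal sketch (the lexical items are arbitrary):

```python
# Merge as standardly defined: combine two syntactic objects into an
# unordered two-member set.
def merge(a, b):
    return frozenset((a, b))

# Recursive application yields hierarchical structure: {read, {the, book}}.
vp = merge("read", merge("the", "book"))

# The result is order-free: only the set structure matters.
print(vp == merge(merge("book", "the"), "read"))  # -> True
```

The sketch also shows why Merge is often compared to simple combinatorial operations in other domains (such as action planning): nothing in the operation itself is language-specific.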
11
Petkov CI, ten Cate C. Structured Sequence Learning: Animal Abilities, Cognitive Operations, and Language Evolution. Top Cogn Sci 2020; 12:828-842. PMID: 31359600. PMCID: PMC7537567. DOI: 10.1111/tops.12444.
Abstract
Human language is a salient example of a neurocognitive system that is specialized to process complex dependencies between sensory events distributed in time, yet how this system evolved and specialized remains unclear. Artificial Grammar Learning (AGL) studies have generated a wealth of insights into how human adults and infants process different types of sequencing dependencies of varying complexity. The AGL paradigm has also been adopted to examine the sequence processing abilities of nonhuman animals. We critically evaluate this growing literature in species ranging from mammals (primates and rats) to birds (pigeons, songbirds, and parrots) considering also cross-species comparisons. The findings are contrasted with seminal studies in human infants that motivated the work in nonhuman animals. This synopsis identifies advances in knowledge and where uncertainty remains regarding the various strategies that nonhuman animals can adopt for processing sequencing dependencies. The paucity of evidence in the few species studied to date and the need for follow-up experiments indicate that we do not yet understand the limits of animal sequence processing capacities and thereby the evolutionary pattern. This vibrant, yet still budding, field of research carries substantial promise for advancing knowledge on animal abilities, cognitive substrates, and language evolution.
12
Mollica F, Siegelman M, Diachek E, Piantadosi ST, Mineroff Z, Futrell R, Kean H, Qian P, Fedorenko E. Composition is the Core Driver of the Language-selective Network. Neurobiol Lang 2020; 1:104-134. PMID: 36794007. PMCID: PMC9923699. DOI: 10.1162/nol_a_00005.
Abstract
The frontotemporal language network responds robustly and selectively to sentences. But the features of linguistic input that drive this response and the computations that these language areas support remain debated. Two key features of sentences are typically confounded in natural linguistic input: words in sentences (a) are semantically and syntactically combinable into phrase- and clause-level meanings, and (b) occur in an order licensed by the language's grammar. Inspired by recent psycholinguistic work establishing that language processing is robust to word order violations, we hypothesized that the core linguistic computation is composition, and, thus, can take place even when the word order violates the grammatical constraints of the language. This hypothesis predicts that a linguistic string should elicit a sentence-level response in the language network provided that the words in that string can enter into dependency relationships as in typical sentences. We tested this prediction across two fMRI experiments (total N = 47) by introducing a varying number of local word swaps into naturalistic sentences, leading to progressively less syntactically well-formed strings. Critically, local dependency relationships were preserved because combinable words remained close to each other. As predicted, word order degradation did not decrease the magnitude of the blood oxygen level-dependent response in the language network, except when combinable words were so far apart that composition among nearby words was highly unlikely. This finding demonstrates that composition is robust to word order violations, and that the language regions respond as strongly as they do to naturalistic linguistic input provided that composition can take place.
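The local-swap manipulation described above can be sketched as a simple procedure. The sentence, swap count, and distance limit below are invented for illustration; the study's stimuli and swap schedule were more controlled:

```python
import random

def local_swaps(words, n_swaps, dist=1, seed=0):
    """Swap word pairs at most `dist` positions apart, degrading word
    order while keeping combinable words close to each other."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = list(words)
    for _ in range(n_swaps):
        i = rng.randrange(len(out) - dist)
        out[i], out[i + dist] = out[i + dist], out[i]
    return out

sent = "the dog quietly chewed the old bone".split()
print(" ".join(local_swaps(sent, 3)))
```

Raising `n_swaps` yields progressively less well-formed strings, while a small `dist` preserves local dependency relationships, which is the study's critical manipulation.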
Affiliation(s)
- Hope Kean
- Brain & Cognitive Sciences Department, MIT
- Peng Qian
- Brain & Cognitive Sciences Department, MIT
- Evelina Fedorenko
- Brain & Cognitive Sciences Department, MIT
- McGovern Institute for Brain Research, MIT
- Psychiatry Department, Massachusetts General Hospital
13
Cohn N, Engelen J, Schilperoord J. The grammar of emoji? Constraints on communicative pictorial sequencing. Cogn Res Princ Implic 2019; 4:33. PMID: 31471857. PMCID: PMC6717234. DOI: 10.1186/s41235-019-0177-0.
Abstract
Emoji have become a prominent part of interactive digital communication. Here, we ask the questions: does a grammatical system govern the way people use emoji; and how do emoji interact with the grammar of written text? We conducted two experiments that asked participants to have a digital conversation with each other using only emoji (Experiment 1) or to substitute at least one emoji for a word in the sentences (Experiment 2). First, we found that the emoji-only utterances of participants remained at simplistic levels of patterning, primarily appearing as one-unit utterances (as formulaic expressions or responsive emotions) or as linear sequencing (for example, repeating the same emoji or providing an unordered list of semantically related emoji). Emoji playing grammatical roles (i.e., 'parts-of-speech') were minimal, and showed little consistency in 'word order'. Second, emoji were substituted more for nouns and adjectives than verbs, while also typically conveying nonredundant information to the sentences. These findings suggest that, while emoji may follow tendencies in their interactions with grammatical structure in multimodal text-emoji productions, they lack grammatical structure on their own.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
- Jan Engelen
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
- Joost Schilperoord
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
14
Huybregts MAC. Infinite Generation of Language Unreachable From a Stepwise Approach. Front Psychol 2019; 10:425. PMID: 30949083. PMCID: PMC6436199. DOI: 10.3389/fpsyg.2019.00425.
Abstract
Language is commonly thought of as a culturally evolved system of communication rather than a computational system for generating linguistic objects that express thought. Furthermore, language is commonly argued to have gradually evolved from a finite proto-language that eventually developed into the infinite language of modern humans. Both ideas are typically integrated in accounts that attempt to explain the gradual evolution of more complex language from increasingly strong communicative pressures. Recently, some arguments have been presented that communicative pressures increase the probability of the emergence of infinitely productive languages. These arguments fail. The question of whether decidable languages evolve into infinite language is vacuous, since infinite generation is the null hypothesis for a generative procedure. The argument that increasing cardinality leads to infinite language is incoherent, since it essentially conflates concepts of performance with notions of competence. Recursive characterization of infinite language is perfectly consistent with finite output. Further, the discussion completely ignores the basic insight that language is not about the decidability of weakly generated strings but rather about the properties of strongly generated structures. Finally, the plausibility proof that infinite productivity evolves from finite language is false because it confuses (infinite) cardinal numbers with (natural) ordinal numbers. Infinite generation cannot be reached by a stepwise approach.
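The point that a recursive characterization is consistent with finite output can be illustrated with the textbook grammar S -> a S b | ab: a finite two-rule device whose weakly generated language a^n b^n is unbounded. The sketch below is illustrative only, not from the article:

```python
def generate(n):
    """Derive a^n b^n by n-fold application of the recursive rule S -> a S b."""
    assert n >= 1
    return "a" * n + "b" * n

# The device is finite; any bound on observed n reflects performance
# limits, not a change in the generative procedure itself.
print([generate(n) for n in range(1, 4)])  # -> ['ab', 'aabb', 'aaabbb']
```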
Affiliation(s)
- M. A. C. Huybregts
- Utrecht Institute of Linguistics OTS (UIL-OTS), Utrecht University, Utrecht, Netherlands
15
Kolodny O, Edelman S. The evolution of the capacity for language: the ecological context and adaptive value of a process of cognitive hijacking. Philos Trans R Soc Lond B Biol Sci 2018; 373:20170052. PMID: 29440518. DOI: 10.1098/rstb.2017.0052.
Abstract
Language plays a pivotal role in the evolution of human culture, yet the evolution of the capacity for language, uniquely within the hominin lineage, remains little understood. Bringing together insights from cognitive psychology, neuroscience, archaeology and behavioural ecology, we hypothesize that this singular occurrence was triggered by exaptation, or 'hijacking', of existing cognitive mechanisms related to sequential processing and motor execution. Observed coupling of the communication system with circuits related to complex action planning and control supports this proposition, but the prehistoric ecological contexts in which this coupling may have occurred and its adaptive value remain elusive. Evolutionary reasoning rules out most existing hypotheses regarding the ecological context of language evolution, which focus on ultimate explanations and ignore proximate mechanisms. Coupling of communication and motor systems, although possible in a short period on evolutionary timescales, required a multi-stepped adaptive process, involving multiple genes and gene networks. We suggest that the behavioural context that exerted the selective pressure to drive these sequential adaptations had to be one in which each of the systems undergoing coupling was independently necessary or highly beneficial, as well as frequent and recurring over evolutionary time. One such context could have been the teaching of tool production or tool use. In the present study, we propose the Cognitive Coupling hypothesis, which brings together these insights and outlines a unifying theory for the evolution of the capacity for language. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'.
Affiliation(s)
- Oren Kolodny
- Department of Biology, Stanford University, Stanford, CA 94305, USA
- Shimon Edelman
- Department of Psychology, Cornell University, Ithaca, NY 14853-7601, USA
16
Dual neurobiological systems underlying language evolution: inferring the ancestral state. Curr Opin Behav Sci 2018. DOI: 10.1016/j.cobeha.2018.05.004.
17
Frank SL, Yang J. Lexical representation explains cortical entrainment during speech comprehension. PLoS One 2018; 13:e0197304. PMID: 29771964. PMCID: PMC5957381. DOI: 10.1371/journal.pone.0197304.
Abstract
Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
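The lexical-level account can be illustrated with a toy spectrum: a signal built from word-level features alone (here an invented content/function-word alternation at a hypothetical 4 Hz syllable rate) produces a spectral peak at a slower, phrase-like rate, with no hierarchical structure involved. This is a sketch of the general idea, not the authors' model:

```python
import numpy as np

fs, seconds = 100, 10          # sample rate (Hz) and duration of the toy signal
syllable_rate = 4.0            # syllables per second (invented)
t = np.arange(0, seconds, 1 / fs)

# A purely lexical feature: alternate word class every 2 syllables.
feature = np.floor(t * syllable_rate / 2) % 2

# Power spectrum of the mean-centered feature signal.
spectrum = np.abs(np.fft.rfft(feature - feature.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # -> 1.0 (a "phrase-rate" peak from lexical information alone)
```

The 1 Hz peak emerges because the lexical feature repeats every four syllables, so low-frequency entrainment need not index phrase- or sentence-level structure building.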
Affiliation(s)
- Stefan L. Frank
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Jinbiao Yang
- Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China