1
Avcu E, Gow D. Exploring Abstract Pattern Representation in the Brain and Non-symbolic Neural Networks. bioRxiv: The Preprint Server for Biology 2023:2023.11.27.568877. [PMID: 38076846] [PMCID: PMC10705297] [DOI: 10.1101/2023.11.27.568877] [Indexed: 12/24/2023]
Abstract
Human cognitive and linguistic generativity depends on the ability to identify abstract relationships between perceptually dissimilar items. Marcus et al. (1999) found that human infants can rapidly discover and generalize patterns of syllable repetition (reduplication) that depend on the abstract property of identity, but simple recurrent neural networks (SRNs) could not. They interpreted these results as evidence that purely associative neural network models provide an inadequate framework for characterizing the fundamental generativity of human cognition. Here, we present a series of deep long short-term memory (LSTM) models that identify abstract syllable repetition patterns and words based on training with cochleagrams that represent auditory stimuli. We demonstrate that models trained to identify individual syllable trigram words and models trained to identify reduplication patterns discover representations that support classification of abstract repetition patterns. Simulations examined the effects of training categories (words vs. patterns) and of pretraining to identify syllables on the development of hidden node representations that support repetition pattern discrimination. Representational similarity analyses (RSA) comparing patterns of regional brain activity based on MRI-constrained MEG/EEG data to patterns of hidden node activation elicited by the same stimuli showed a significant correlation between brain activity localized in primarily posterior temporal regions and representations discovered by the models. These results suggest that associative mechanisms operating over discoverable representations that capture abstract stimulus properties account for a critical example of human cognitive generativity.
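The RSA comparison described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for each data source and correlate their upper triangles. This is a minimal illustrative sketch on synthetic data, not the authors' pipeline; the toy stimulus set and all parameter choices are assumptions.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between all stimulus pairs."""
    z = (patterns - patterns.mean(axis=1, keepdims=True)) / patterns.std(axis=1, keepdims=True)
    return 1.0 - (z @ z.T) / patterns.shape[1]

def spearman(a, b):
    """Spearman correlation computed as Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

def rsa_score(brain, model):
    """Correlate the upper triangles of the two RDMs (the usual RSA statistic)."""
    iu = np.triu_indices(brain.shape[0], k=1)
    return spearman(rdm(brain)[iu], rdm(model)[iu])

# Toy data: 12 stimuli sharing a 5-dimensional latent structure, measured in two
# different "spaces" (e.g., cortical sources vs. hidden-layer units).
rng = np.random.default_rng(0)
latent = rng.normal(size=(12, 5))
brain = latent @ rng.normal(size=(5, 40)) + 0.1 * rng.normal(size=(12, 40))
model = latent @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(12, 30))
print(round(rsa_score(brain, model), 2))
```

Because both toy measurement spaces inherit the same latent geometry, the score comes out well above zero; permuting stimulus labels in one RDM would give a null distribution for significance testing.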
Affiliation(s)
- Enes Avcu
- Department of Neurology, Massachusetts General Hospital, Cambridge, MA 02170
- David Gow
- Department of Neurology, Massachusetts General Hospital, Cambridge, MA 02170
2
Gow DW, Avcu E, Schoenhaut A, Sorensen DO, Ahlfors SP. Abstract representations in temporal cortex support generative linguistic processing. Language, Cognition and Neuroscience 2022; 38:765-778. [PMID: 37332658] [PMCID: PMC10270390] [DOI: 10.1080/23273798.2022.2157029] [Received: 03/08/2022] [Accepted: 11/21/2022] [Indexed: 06/20/2023]
Abstract
Generativity, the ability to create and evaluate novel constructions, is a fundamental property of human language and cognition. The productivity of generative processes is determined by the scope of the representations they engage. Here we examine the neural representation of reduplication, a productive phonological process that can create novel forms through patterned syllable copying (e.g. ba-mih → ba-ba-mih, ba-mih-mih, or ba-mih-ba). Using MRI-constrained source estimates of combined MEG/EEG data collected during an auditory artificial grammar task, we identified localized cortical activity associated with syllable reduplication pattern contrasts in novel trisyllabic nonwords. Neural decoding analyses identified a set of predominantly right hemisphere temporal lobe regions whose activity reliably discriminated reduplication patterns evoked by untrained, novel stimuli. Effective connectivity analyses suggested that sensitivity to abstracted reduplication patterns was propagated between these temporal regions. These results suggest that localized temporal lobe activity patterns function as abstract representations that support linguistic generativity.
Affiliation(s)
- David W. Gow
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Department of Psychology, Salem State University, Salem, MA 01970
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Program in Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA 02115
- Enes Avcu
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- Adriana Schoenhaut
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
- David O. Sorensen
- Program in Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA 02115
- Seppo P. Ahlfors
- Program in Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA 02115
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114
3
Wulfert S, Auer P, Hanulíková A. Speech Errors in the Production of Initial Consonant Clusters: The Roles of Frequency and Sonority. Journal of Speech, Language, and Hearing Research 2022; 65:3709-3729. [PMID: 36198060] [DOI: 10.1044/2022_jslhr-22-00148] [Indexed: 06/16/2023]
Abstract
PURPOSE One of the central questions in speech production research is to what degree certain structures have an inherent difficulty and to what degree repeated encounter and practice make them easier to process. The goal of this article was to determine the extent to which frequency and sonority distance of consonant clusters predict production difficulties. METHOD We used a tongue twister paradigm to elicit speech errors on syllable-initial German consonant clusters and investigated the relative influences of cluster frequency and sonority distance between the consonants of a cluster on production accuracy. Native speakers of German produced pairs of monosyllabic pseudowords beginning with consonant clusters at a high speech rate. RESULTS Error rates decreased with increasing frequency of the consonant clusters. A high sonority distance, on the other hand, did not facilitate a cluster's production, but speech errors led to optimized sonority structure for a subgroup of clusters. In addition, the combination of consonant clusters in a stimulus pair had a strong impact on production accuracy. CONCLUSION These results suggest that frequency of use and sonority distance, as well as syntagmatic competition between adjacent sound sequences, codetermine production ease.
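The reported frequency effect (error rates falling as cluster frequency rises) can be illustrated with a small simulation; the clusters, log frequencies, and the logistic link below are hypothetical, chosen only to show the predicted direction of the effect, not the study's materials or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical syllable-initial clusters with made-up log corpus frequencies
clusters = {"ʃt": 4.0, "ʃl": 3.2, "kn": 2.5, "pfl": 1.4, "skl": 0.8}

def simulate_error_rate(log_freq, trials=400):
    """Illustrative link: error probability falls logistically as frequency rises."""
    p_error = 1.0 / (1.0 + np.exp(log_freq - 2.0))
    return rng.binomial(1, p_error, trials).mean()

# Observed error rate per cluster over simulated tongue-twister trials
rates = {c: simulate_error_rate(f) for c, f in clusters.items()}
for cluster, rate in rates.items():
    print(cluster, round(rate, 2))
```

With enough trials per cluster, the simulated error rates recover the frequency ordering, which is the qualitative pattern the RESULTS section describes.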
Affiliation(s)
- Sophia Wulfert
- Department of German Studies, University of Freiburg, Germany
- Peter Auer
- Department of German Studies, University of Freiburg, Germany
- Adriana Hanulíková
- Department of German Studies, University of Freiburg, Germany
- Freiburg Institute for Advanced Studies, University of Freiburg, Germany
4
Beguš G. Local and non-local dependency learning and emergence of rule-like representations in speech data by deep convolutional generative adversarial networks. Computer Speech & Language 2022. [DOI: 10.1016/j.csl.2021.101244] [Indexed: 11/26/2022]
5
Abstract
The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout, verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic "audible gestalt" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
Affiliation(s)
- Judith Beck
- Cognitive Science, University of Freiburg, Germany
6
Napoli DJ, Ferrara C. Correlations Between Handshape and Movement in Sign Languages. Cognitive Science 2021; 45:e12944. [PMID: 34018242] [PMCID: PMC8243953] [DOI: 10.1111/cogs.12944] [Received: 01/03/2020] [Revised: 12/27/2020] [Accepted: 12/31/2020] [Indexed: 12/04/2022]
Abstract
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.
7
Gow DW, Schoenhaut A, Avcu E, Ahlfors SP. Behavioral and Neurodynamic Effects of Word Learning on Phonotactic Repair. Frontiers in Psychology 2021; 12:590155. [PMID: 33776832] [PMCID: PMC7987836] [DOI: 10.3389/fpsyg.2021.590155] [Received: 07/31/2020] [Accepted: 02/04/2021] [Indexed: 11/22/2022]
Abstract
Processes governing the creation, perception and production of spoken words are sensitive to the patterns of speech sounds in the language user's lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to "repair" illegal onset clusters (/sr/) and report them as legal ones (/ʃr/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.
Affiliation(s)
- David W. Gow
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Department of Psychology, Salem State University, Salem, MA, United States
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, United States
- Adriana Schoenhaut
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Enes Avcu
- Department of Neurology, Massachusetts General Hospital, Boston, MA, United States
- Seppo P. Ahlfors
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
8
Xue S, Jacobs AM, Lüdtke J. What Is the Difference? Rereading Shakespeare's Sonnets - An Eye Tracking Study. Frontiers in Psychology 2020; 11:421. [PMID: 32273860] [PMCID: PMC7113389] [DOI: 10.3389/fpsyg.2020.00421] [Received: 11/13/2019] [Accepted: 02/24/2020] [Indexed: 11/18/2022]
Abstract
Texts are often reread in everyday life, but most studies of rereading have been based on expository texts rather than literary ones such as poems, though literary texts may be reread more often than others. To correct this bias, the present study is based on two of Shakespeare's sonnets. Eye movements were recorded as participants read a sonnet and then read it again after a few minutes. After each reading, comprehension and appreciation were measured with the help of a questionnaire. In general, compared to the first reading, rereading improved the fluency of reading (shorter total reading times, shorter regression times, and lower fixation probability) and the depth of comprehension. Contrary to other rereading studies using literary texts, no increase in appreciation was apparent. Moreover, results from a predictive modeling analysis showed that readers' eye movements were determined by the same critical psycholinguistic features throughout the two sessions. Apparently, even in the case of poetry, eye movement control in reading is determined mainly by surface features of the text, unaffected by repetition.
Affiliation(s)
- Shuwei Xue
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Arthur M. Jacobs
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Center for Cognitive Neuroscience Berlin, Berlin, Germany
- Jana Lüdtke
- Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
9
Hueber T, Tatulli E, Girin L, Schwartz JL. Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning. Neural Computation 2020; 32:596-625. [PMID: 31951798] [DOI: 10.1162/neco_a_01264] [Indexed: 11/04/2022]
Abstract
Sensory processing is increasingly conceived in a predictive framework in which neurons would constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on the accuracy of predictions that can be computed in real sensory scenes. Here, we focus on the sensory processing of auditory and audiovisual speech. We propose a set of computational models based on artificial neural networks (mixing deep feedforward and convolutional networks), which are trained to predict future audio observations from present and past audio or audiovisual observations (i.e., including lip movements). Those predictions exploit purely local phonetic regularities with no explicit call to higher linguistic levels. Experiments are conducted on the multispeaker LibriSpeech audio speech database (around 100 hours) and on the NTCD-TIMIT audiovisual speech database (around 7 hours). Predictions are efficient in a short temporal range (25-50 ms), accounting for 50% to 75% of the variance of the incoming stimulus, which could potentially save up to three-quarters of the processing power. They then quickly decrease and almost vanish after 250 ms. Adding information on the lips slightly improves predictions, with a 5% to 10% increase in explained variance. Interestingly, the visual gain vanishes more slowly, and the gain is maximum for a delay of 75 ms between image and predicted sound.
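The core quantity in the abstract above, explained variance of a prediction as a function of temporal horizon, can be illustrated with a minimal numpy sketch: fit a one-tap linear predictor of a future sample from the present one and measure R² at several horizons. The AR(1) process standing in for a speech signal and all parameters are illustrative assumptions, not the authors' networks or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a short-term-predictable sensory stream: an AR(1) process
n = 20000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.98 * x[t - 1] + rng.normal(scale=0.1)

def explained_variance(signal, horizon):
    """Fit a 1-tap linear predictor of signal[t + horizon] from signal[t]; return R^2."""
    past, future = signal[:-horizon], signal[horizon:]
    slope = np.cov(past, future)[0, 1] / past.var()
    residual = future - slope * past
    return 1.0 - residual.var() / future.var()

# Predictability is high at short horizons and decays as the horizon grows,
# qualitatively mirroring the 25-50 ms vs. 250 ms contrast reported above.
for horizon in (5, 10, 50):
    print(horizon, round(explained_variance(x, horizon), 2))
```

For an AR(1) process with coefficient a, the theoretical R² at horizon h is a^(2h), so the decay with horizon is exponential; real speech decays faster because its local regularities are only short-range.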
Affiliation(s)
- Thomas Hueber
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Eric Tatulli
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
- Laurent Girin
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France, and Inria Grenoble-Rhône-Alpes, 38330 Montbonnot-Saint-Martin, France
- Jean-Luc Schwartz
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France
10
Bucci J, Lorusso P, Gerber S, Grimaldi M, Schwartz JL. Assessing the Representation of Phonological Rules by a Production Study of Non-Words in Coratino. Phonetica 2019; 77:405-428. [PMID: 31825928] [DOI: 10.1159/000504452] [Received: 11/21/2018] [Accepted: 10/19/2019] [Indexed: 06/10/2023]
Abstract
Phonological regularities in a given language can be described as a set of formal rules applied to logical expressions (e.g., the value of a distinctive feature) or alternatively as distributional properties emerging from the phonetic substance. An indirect way to assess how phonology is represented in a speaker's mind consists in testing how phonological regularities are transferred to non-words. This is the objective of this study, focusing on Coratino, a dialect from southern Italy spoken in the Apulia region. In Coratino, a complex process of vowel reduction operates, transforming the /i e ɛ u o ɔ a/ system for stressed vowels into a system with a smaller number of vowels for unstressed configurations, characterized by four major properties: (1) all word-initial vowels are maintained, even unstressed; (2) /a/ is never reduced, even unstressed; (3) unstressed vowels /i e ɛ u o ɔ/ are protected against reduction when they are adjacent to a consonant that shares articulation (labiality and velarity for /u o ɔ/ and palatality for /i e ɛ/); (4) when they are reduced, high vowels are reduced to /ɨ/ and mid vowels to /ə/. A production experiment was carried out on 19 speakers of Coratino to test whether these properties were displayed with non-words. The production data display a complex pattern which seems to imply both explicit/formal rules and distributional properties transferred statistically to non-words. Furthermore, the speakers appear to vary considerably in how they perform this task. Altogether, this suggests that both formal rules and distributional principles contribute to the encoding of Coratino phonology in the speaker's mind.
Affiliation(s)
- Silvain Gerber
- GIPSA-lab, CNRS, Université Grenoble Alpes, Grenoble, France
11
Xue S, Lüdtke J, Sylvester T, Jacobs AM. Reading Shakespeare Sonnets: Combining Quantitative Narrative Analysis and Predictive Modeling - An Eye Tracking Study. Journal of Eye Movement Research 2019; 12:10.16910/jemr.12.5.2. [PMID: 33828746] [PMCID: PMC7968390] [DOI: 10.16910/jemr.12.5.2] [Indexed: 12/19/2022]
Abstract
As part of a larger interdisciplinary project on the reception of Shakespeare's sonnets (1, 2), the present study analyzed the eye movement behavior of participants reading three of the 154 sonnets as a function of seven lexical features extracted via Quantitative Narrative Analysis (QNA). Using a machine learning-based predictive modeling approach, five 'surface' features (word length, orthographic neighborhood density, word frequency, orthographic dissimilarity, and sonority score) were detected as important predictors of total reading time and fixation probability in poetry reading. The fact that one phonological feature, i.e., sonority score, also played a role is in line with current theorizing on poetry reading. Our approach opens new ways for future eye movement research on reading poetic texts and other complex literary materials (3).
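The feature-based predictive modeling described above can be sketched as a regression of simulated reading times on standardized surface features. The features, effect sizes, and data below are all hypothetical, and a plain least-squares fit stands in for the machine learning model used in the study; the point is only how standardized coefficients rank feature importance.

```python
import numpy as np

rng = np.random.default_rng(2)
n_words = 300

# Hypothetical QNA-style surface features for 300 words
word_length = rng.integers(2, 12, n_words).astype(float)
log_freq = rng.normal(3.0, 1.0, n_words)
sonority = rng.normal(0.0, 1.0, n_words)

# Simulated total reading times (ms): longer and rarer words are read more slowly
rt = 200 + 25 * word_length - 30 * log_freq + 8 * sonority + rng.normal(0, 20, n_words)

# Standardized multiple regression: coefficients on z-scored predictors are
# directly comparable, so their magnitudes rank the features' importance.
X = np.column_stack([word_length, log_freq, sonority])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (rt - rt.mean()) / rt.std()
design = np.column_stack([np.ones(n_words), Xz])
beta, *_ = np.linalg.lstsq(design, yz, rcond=None)
for name, b in zip(["length", "log_freq", "sonority"], beta[1:]):
    print(name, round(float(b), 2))
```

On this toy data the fit recovers the built-in ordering: word length dominates, frequency facilitates (negative coefficient), and sonority contributes a smaller positive effect.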
12
Berent I. Is markedness a confused concept? Cognitive Neuropsychology 2018; 34:493-499. [PMID: 29457556] [DOI: 10.1080/02643294.2017.1422485] [Indexed: 10/18/2022]
Abstract
It is well known that, across languages, certain phonological features are more frequent than others. But whether these facts reflect abstract universal markedness constraints or functional pressures (auditory and articulatory difficulties and lexical frequency) is unknown. Romani, Galuzzi, Guariglia, and Goslin (2017) report that the putative markedness of phonological features captures their order of acquisition and their propensity to elicit errors in patients with an apraxia of speech (but not in phonological aphasia). The authors believe these results challenge the existence of abstract markedness constraints. They also raise some concerns about the explanatory utility of the markedness hypothesis. This commentary demonstrates that markedness is not inherently vague or vacuous nor is it falsified by Romani et al.'s empirical findings. As such, these results leave wide open the possibility that some phonological markedness constraints are abstract.
Affiliation(s)
- Iris Berent
- Department of Psychology, Northeastern University, Boston, MA, USA
13
Zhao X, Berent I. The Basis of the Syllable Hierarchy: Articulatory Pressures or Universal Phonological Constraints? Journal of Psycholinguistic Research 2018; 47:29-64. [PMID: 28710697] [DOI: 10.1007/s10936-017-9510-2] [Indexed: 06/07/2023]
Abstract
Across languages, certain syllable types are systematically preferred to others (e.g., blif ≻ lbif, where ≻ indicates a preference). Previous research has shown that these preferences are active in the brains of individual speakers, they are evident even when none of these syllable types exists in participants' language, and even when the stimuli are presented in print. These results suggest that the syllable hierarchy cannot be reduced to either lexical or auditory/phonetic pressures. Here, we examine whether the syllable hierarchy is due to articulatory pressures. According to the motor embodiment view, the perception of a linguistic stimulus requires simulating its production; dispreferred syllables (e.g., lbif) are universally disliked because their production is harder to simulate. To address this possibility, we assessed syllable preferences while articulation was mechanically suppressed. Our four experiments each found significant effects of suppression. Remarkably, people remained sensitive to the syllable hierarchy regardless of suppression. Specifically, results with auditory materials (Experiments 1-2) showed strong effects of syllable structure irrespective of suppression. Moreover, syllable structure uniquely accounted for listeners' behavior even when controlling for several phonetic characteristics of our auditory materials. Results with printed stimuli (Experiments 3-4) were more complex, as participants in these experiments relied on both phonological and graphemic information. Nonetheless, readers were sensitive to most of the syllable hierarchy, and these preferences emerged when articulation was suppressed, and even when the statistical properties of our materials were controlled via a regression analysis. Together, these findings indicate that speakers possess broad grammatical preferences that are irreducible to either sensory or motor factors.
Affiliation(s)
- Xu Zhao
- Department of Psychology, Northeastern University, 125 Nightingale, 360 Huntington Ave, Boston, MA 02115, USA
- Iris Berent
- Department of Psychology, Northeastern University, 125 Nightingale, 360 Huntington Ave, Boston, MA 02115, USA
14
Romani C, Galuzzi C, Guariglia C, Goslin J. Comparing phoneme frequency, age of acquisition, and loss in aphasia: Implications for phonological universals. Cognitive Neuropsychology 2017; 34:449-471. [PMID: 28914137] [DOI: 10.1080/02643294.2017.1369942] [Indexed: 10/18/2022]
Abstract
Phonological complexity may be central to the nature of human language. It may shape the distribution of phonemes and phoneme sequences within languages, but also determine age of acquisition and susceptibility to loss in aphasia. We evaluated this claim using frequency statistics derived from a corpus of phonologically transcribed Italian words (PhonItalia, available at phonitalia.org), rankings of phoneme age of acquisition (AoA), and rates of phoneme errors in patients with apraxia of speech (AoS) as an indication of articulatory complexity. These measures were related to cross-linguistically derived markedness rankings. We found strong correspondences. AoA, however, was predicted by both apraxic errors and frequency, suggesting independent contributions of these variables. Our results support the reality of universal principles of complexity. In addition, they suggest that these complexity principles have articulatory underpinnings, since they modulate the production of patients with AoS but not the production of patients with more central phonological difficulties.
Affiliation(s)
- Cristina Romani
- School of Life and Health Sciences, Aston University, Birmingham, UK
- Claudia Galuzzi
- Neuropsychology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Cecilia Guariglia
- Neuropsychology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Department of Psychology, Sapienza University, Rome, Italy
- Jeremy Goslin
- School of Psychology, University of Plymouth, Plymouth, UK
15
Abstract
We report on an English-speaking, aphasic individual (TB) who showed a striking dissociation in speaking with the different forms (allomorphs) that an inflection can take. Although very accurate in producing the consonantal inflections (-/s/, -/z/, -/d/, -/t/), TB consistently omitted syllabic inflections (-/əz/, -/əd/), therefore correctly saying "dogs" or "walked," but "bench" for benches or "skate" for skated. Results from control tests ruled out that TB's selective difficulties stemmed from problems in selecting the correct inflection for the syntactic context or problems related to phonological or articulatory mechanisms. TB's selective difficulties appeared instead to concern morpho-phonological mechanisms responsible for adapting morphological elements to word phonology. These mechanisms determine whether the plural inflection surfaces in the noun bench as voiced (-/z/), unvoiced (-/s/) or syllabic (-/əz/). Our results have implications for understanding how morphological elements are encoded in the lexicon and the nature of morpho-phonological mechanisms involved in speech production.
Affiliation(s)
- Michele Miozzo
- Department of Psychology, The New School, New York, NY, USA
16
Abstract
Does knowledge of language consist of abstract principles, or is it fully embodied in the sensorimotor system? To address this question, we investigate the double identity of doubling (e.g., slaflaf, or generally, XX; where X stands for a phonological constituent). Across languages, doubling is known to elicit conflicting preferences at different levels of linguistic analysis (phonology vs. morphology). Here, we show that these preferences are active in the brains of individual speakers, and they are demonstrably distinct from sensorimotor pressures. We first demonstrate that doubling in novel English words elicits divergent percepts: Viewed as meaningless (phonological) forms, doubling is disliked (e.g., slaflaf < slafmak), but once doubling in form is systematically linked to meaning (e.g., slaf = ball, slaflaf = balls), the doubling aversion shifts into a reliable (morphological) preference. We next show that sign-naive speakers spontaneously project these principles to novel signs in American Sign Language, and their capacity to do so depends on the structure of their spoken language (English vs. Hebrew). These results demonstrate that linguistic preferences doubly dissociate from sensorimotor demands: A single stimulus can elicit diverse percepts, yet these percepts are invariant across stimulus modality--for speech and signs. These conclusions are in line with the possibility that some linguistic principles are abstract, and they apply broadly across language modality.
17
Berent I. Commentary: "An Evaluation of Universal Grammar and the Phonological Mind" - UG Is Still a Viable Hypothesis. Frontiers in Psychology 2016; 7:1029. [PMID: 27471480] [PMCID: PMC4943953] [DOI: 10.3389/fpsyg.2016.01029] [Received: 02/04/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022]
Abstract
Everett (2016b) criticizes The Phonological Mind thesis (Berent, 2013a,b) on logical, methodological and empirical grounds. Most of Everett’s concerns are directed toward the hypothesis that the phonological grammar is constrained by universal grammatical (UG) principles. Contrary to Everett’s logical challenges, here I show that the UG hypothesis is readily falsifiable, that universality is not inconsistent with innateness (Everett’s arguments to the contrary are rooted in a basic confusion of the UG phenotype and the genotype), and that its empirical evaluation does not require a full evolutionary account of language. A detailed analysis of one case study, the syllable hierarchy, presents a specific demonstration that people have knowledge of putatively universal principles that are unattested in their language and these principles are most likely linguistic in nature. Whether Universal Grammar exists remains unknown, but Everett’s arguments hardly undermine the viability of this hypothesis.
Affiliation(s)
- Iris Berent
- Phonology and Reading Laboratory, Department of Psychology, Northeastern University, Boston, MA, USA
18
Obrig H, Mentzel J, Rossi S. Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach. Brain 2016; 139:1800-16. [DOI: 10.1093/brain/aww077]
19
Everett DL. An Evaluation of Universal Grammar and the Phonological Mind. Front Psychol 2016; 7:15. [PMID: 26903889] [PMCID: PMC4744836] [DOI: 10.3389/fpsyg.2016.00015]
Abstract
This paper argues against the hypothesis of a “phonological mind” advanced by Berent. It establishes that there is no evidence that phonology is innate and that, in fact, the simplest hypothesis seems to be that phonology is learned like other human abilities. Moreover, the paper fleshes out the original claim of Philip Lieberman that Universal Grammar predicts that not everyone should be able to learn every language, i.e., the opposite of what UG is normally thought to predict. The paper also underscores the problem that the absence of recursion in Pirahã represents for Universal Grammar proposals.
Affiliation(s)
- Daniel L Everett
- Department of Arts and Sciences, Bentley University, Waltham, MA, USA
20
Öttl B, Jäger G, Kaup B. Does formal complexity reflect cognitive complexity? Investigating aspects of the Chomsky Hierarchy in an artificial language learning study. PLoS One 2015; 10:e0123059. [PMID: 25885790] [PMCID: PMC4401728] [DOI: 10.1371/journal.pone.0123059]
Abstract
This study investigated whether formal complexity, as described by the Chomsky Hierarchy, corresponds to cognitive complexity during language learning. According to the Chomsky Hierarchy, nested dependencies (context-free) are less complex than cross-serial dependencies (mildly context-sensitive). In two artificial grammar learning (AGL) experiments participants were presented with a language containing either nested or cross-serial dependencies. A learning effect for both types of dependencies could be observed, but no difference between dependency types emerged. These behavioral findings do not seem to reflect complexity differences as described in the Chomsky Hierarchy. This study extends previous findings in demonstrating learning effects for nested and cross-serial dependencies with more natural stimulus materials in a classical AGL paradigm after only one hour of exposure. The current findings can be taken as a starting point for further exploring the degree to which the Chomsky Hierarchy reflects cognitive processes.
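The two dependency types contrasted in the abstract can be sketched as string patterns (a minimal illustration of the formal distinction; the indexed-symbol notation is an assumption made here, not the study's stimulus material):

```python
def nested(n):
    """Context-free pattern: dependencies close in reverse order (a1 a2 b2 b1)."""
    return [f"a{i}" for i in range(1, n + 1)] + [f"b{i}" for i in range(n, 0, -1)]

def cross_serial(n):
    """Mildly context-sensitive pattern: dependencies close in the same order (a1 a2 b1 b2)."""
    return [f"a{i}" for i in range(1, n + 1)] + [f"b{i}" for i in range(1, n + 1)]

print(nested(2))        # ['a1', 'a2', 'b2', 'b1']
print(cross_serial(2))  # ['a1', 'a2', 'b1', 'b2']
```

On the Chomsky Hierarchy the cross-serial pattern is formally more complex, yet the study found no corresponding difference in learning difficulty.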
Affiliation(s)
- Birgit Öttl
- Department of Psychology, Eberhard Karls University, Tübingen, Germany
- Gerhard Jäger
- Department of Linguistics, Eberhard Karls University, Tübingen, Germany
- Barbara Kaup
- Department of Psychology, Eberhard Karls University, Tübingen, Germany
21
Berent I, Dupuis A, Brentari D. Phonological reduplication in sign language: Rules rule. Front Psychol 2014; 5:560. [PMID: 24959158] [PMCID: PMC4050968] [DOI: 10.3389/fpsyg.2014.00560]
Abstract
Productivity, the hallmark of linguistic competence, is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
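Formally, the rule X→XX examined above is simple doubling of a base form, regardless of what X contains. A minimal sketch (encoding a sign as a string is an assumption made here for illustration; ASL signs are not strings):

```python
def reduplicate(base):
    """Apply the rule X -> XX by repeating the base form,
    whatever its content; this is what makes the rule algebraic."""
    return base + base

print(reduplicate("X"))     # 'XX'
print(reduplicate("slaf"))  # 'slafslaf'
```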
Affiliation(s)
- Iris Berent
- Department of Psychology, Northeastern University, Boston, MA, USA
- Amanda Dupuis
- Department of Psychology, Northeastern University, Boston, MA, USA
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL, USA
|