1. Todorović S, Anton JL, Sein J, Nazarian B, Chanoine V, Rauchbauer B, Kotz SA, Runnqvist E. Cortico-Cerebellar Monitoring of Speech Sequence Production. Neurobiol Lang (Camb) 2024;5:701-721. [PMID: 39175789; PMCID: PMC11338302; DOI: 10.1162/nol_a_00113]
Abstract
In a functional magnetic resonance imaging study, we examined speech error monitoring in a cortico-cerebellar network for two contrasts: (a) correct trials with high versus low articulatory error probability and (b) overtly committed errors versus correct trials. Engagement of the cognitive cerebellar region Crus I in both contrasts suggests that this region is involved in overarching performance monitoring. The activation of cerebellar motor regions (superior medial cerebellum, lobules VI and VIII) indicates the additional presence of a sensorimotor-driven implementation of control. The combined pattern of pre-supplementary motor area (active across contrasts) and anterior cingulate cortex (only active in the contrast involving overt errors) activations suggests sensorimotor-driven feedback monitoring in the medial frontal cortex, making use of proprioception and auditory feedback from overt errors. Differential temporal and parietal cortex activation across contrasts indicates involvement beyond sensorimotor-driven feedback, in line with speech production models that link these regions to auditory target processing and internal-model-like mechanisms. These results highlight the presence of multiple, possibly hierarchically interdependent, mechanisms that support the optimization of speech production.
Affiliation(s)
- Snežana Todorović: Laboratoire Parole et Langage, CNRS–Aix-Marseille Université, Aix-en-Provence, France; Institute of Language, Communication and the Brain, Aix-en-Provence, France
- Jean-Luc Anton: Centre IRM, Marseille, France; INT, CNRS–Aix-Marseille Université, Marseille, France
- Julien Sein: Centre IRM, Marseille, France; INT, CNRS–Aix-Marseille Université, Marseille, France
- Bruno Nazarian: Centre IRM, Marseille, France; INT, CNRS–Aix-Marseille Université, Marseille, France
- Valérie Chanoine: Institute of Language, Communication and the Brain, Aix-en-Provence, France
- Birgit Rauchbauer: Laboratoire Parole et Langage, CNRS–Aix-Marseille Université, Aix-en-Provence, France; Institute of Language, Communication and the Brain, Aix-en-Provence, France
- Sonja A. Kotz: Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Elin Runnqvist: Laboratoire Parole et Langage, CNRS–Aix-Marseille Université, Aix-en-Provence, France; Institute of Language, Communication and the Brain, Aix-en-Provence, France
2. Balčiūnienė I, Kornev AN. Linguistic disfluencies in Russian-speaking typically and atypically developing children: individual variability in different contexts. Clin Linguist Phon 2024;38:287-306. [PMID: 36787206; DOI: 10.1080/02699206.2023.2176786]
Abstract
Disfluency in children and adults resembles speech errors but, at the same time, is an essential feature of spontaneous (unprepared) speech. The present study aimed to evaluate linguistic disfluencies in typically and atypically developing Russian-speaking children from the perspective of the dynamic adaptive model of self-monitoring in speech production. Four language samples were collected from 10 six-year-old children with developmental language disorder and 14 typically developing peers: two storytelling tasks, a structured conversation, and a play argument. After transcribing the audio recordings and marking linguistic disfluencies, the authors conducted a structured distributional analysis, estimating the distribution of several disfluency indexes to assess the prevalence and profiles of different (sub)types of disfluencies. Disfluency rates were similar in the typically developing children and the children with developmental language disorder, but the distributional index scores showed that the tasks significantly affected the rates of different (sub)types of disfluencies, and task-related patterns in a set of distributional indexes significantly distinguished the groups. Thus, changes in the disfluency profile in response to different external factors, a sign of the flexibility of an adaptive self-monitoring system, may be limited in children with developmental language disorder.
Affiliation(s)
- Ingrida Balčiūnienė: Department of Lithuanian Studies, Vytautas Magnus University, Kaunas, Lithuania
- Aleksandr N Kornev: Department of Logopathology, Saint-Petersburg State Pediatric Medical University, Saint-Petersburg, Russia
3. Oppenheim GM, Nozari N. Similarity-induced interference or facilitation in language production reflects representation, not selection. Cognition 2024;245:105720. [PMID: 38266353; DOI: 10.1016/j.cognition.2024.105720]
Abstract
Researchers have long interpreted the presence or absence of semantic interference in picture naming latencies as confirming or refuting theoretical claims regarding competitive lexical selection. But inconsistent empirical results challenge any mechanistic interpretation. A behavioral experiment first verified an apparent boundary condition in a blocked picture naming task: when orthogonally manipulating association type, taxonomic associations consistently elicit interference, while thematic associations do not. A plausible representational difference is that thematic feature activations depend more on supporting contexts. Simulations show that context-sensitivity emerges from the distributional statistics that are often used to measure thematic associations: residual semantic activation facilitates the retrieval of words that share semantic features, counteracting learning-based interference, and training a production model with greater sequential cooccurrence for thematically related words causes it to acquire stronger residual activation for thematic features. Modulating residual activation, either directly or through training, allows the model to capture gradient values of interference and facilitation, and in every simulation competitive and noncompetitive selection algorithms produce qualitatively equivalent results.
Affiliation(s)
- Gary M Oppenheim: Department of Psychology, Bangor University, Bangor, Wales, UK; Department of Psychology, The University of Texas at Austin, USA
- Nazbanou Nozari: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
4. Alexander JM, Hedrick T, Stark BC. Inner speech in the daily lives of people with aphasia. Front Psychol 2024;15:1335425. [PMID: 38577124; PMCID: PMC10991845; DOI: 10.3389/fpsyg.2024.1335425]
Abstract
INTRODUCTION This exploratory, preliminary feasibility study evaluated the extent to which adults with chronic aphasia (N = 23) report experiencing inner speech in their daily lives by leveraging experience sampling and survey methodology. METHODS The presence of inner speech was assessed at 30 time-points, and themes of inner speech at three time-points, over the course of three weeks. The relationship of inner speech to aphasia severity, demographic information (age, sex, years post-stroke), and insight into language impairment was evaluated. RESULTS There was low attrition (<8%) and high compliance (>94%) with the study procedures, and inner speech was experienced in most sampled instances (>78%). The most common themes of inner speech experience across the weeks were 'when remembering', 'to plan', and 'to motivate oneself'. No significant relationship was identified between inner speech and aphasia severity, insight into language impairment, or demographic information. In conclusion, adults with aphasia tend to report experiencing inner speech often, with some shared themes (e.g., remembering, planning), and use inner speech to explore themes that are uncommon in young adults in other studies (e.g., to talk to themselves about health). DISCUSSION High compliance and low attrition suggest the design is feasible, and the results emphasize the importance of collecting data in age-similar, non-brain-damaged peers as well as in adults with other neurogenic communication disorders to fully understand the experience and use of inner speech in daily life. Clinical implications and future directions are discussed.
Affiliation(s)
- Julianne M. Alexander: Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States; Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Tessa Hedrick: Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States; Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Brielle C. Stark: Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States; Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
5. Meier AM, Guenther FH. Neurocomputational modeling of speech motor development. J Child Lang 2023;50:1318-1335. [PMID: 37337871; PMCID: PMC10615680; DOI: 10.1017/s0305000923000260]
Abstract
This review describes a computational approach for modeling the development of speech motor control in infants. We address the development of two levels of control: articulation of individual speech sounds (defined here as phonemes, syllables, or words for which there is an optimized motor program) and production of sound sequences such as phrases or sentences. We describe the DIVA model of speech motor control and its application to the problem of learning individual sounds in the infant's native language. Then we describe the GODIVA model, an extension of DIVA, and how chunking of frequently produced phoneme sequences is implemented within it.
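The feedforward/feedback division of labor that models like DIVA formalize can be pictured with a toy controller. The sketch below is only an illustrative simplification under assumed gains and a scalar "auditory target"; it is not the published DIVA or GODIVA implementation.

```python
import numpy as np

def speak(target, feedforward_cmd, feedback_gain=0.5, steps=20):
    """Toy controller in the spirit of feedforward-plus-feedback speech control.

    At each step the motor command is the current command plus a correction
    proportional to the auditory error (target minus what was just produced).
    All quantities are illustrative scalars, not the model's actual state.
    """
    rng = np.random.default_rng(0)
    cmd = feedforward_cmd
    produced = []
    for _ in range(steps):
        output = cmd + rng.normal(0.0, 0.05)        # noisy articulation
        auditory_error = target - output            # mismatch heard via feedback
        cmd = cmd + feedback_gain * auditory_error  # online feedback correction
        produced.append(output)
    # With practice, the feedforward command itself is nudged toward the
    # corrected command, reducing reliance on slow feedback control.
    updated_feedforward = feedforward_cmd + 0.1 * (cmd - feedforward_cmd)
    return produced, updated_feedforward

outputs, new_ff = speak(target=1.0, feedforward_cmd=0.6)
```

The point the sketch mirrors is that feedback correction operates online within a production attempt, while the feedforward command is updated more slowly across attempts, which is how such models describe motor learning during development.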
Affiliation(s)
- Andrew M Meier: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215
- Frank H Guenther: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215; Department of Biomedical Engineering, Boston University, Boston, MA 02215
6. Navarrete E, De Pedis M, Lorenzoni A. Verbal deception in picture naming. Q J Exp Psychol (Hove) 2023;76:2390-2400. [PMID: 36475941; DOI: 10.1177/17470218221146540]
Abstract
Telling a lie requires several cognitive processes. We investigated three cognitive processes involved in verbal deception: the decision to deceive, the suppression of the true statement, and the construction of the false statement. In a standard picture-naming task, participants were instructed to produce true and false naming statements. Critically, participants could freely decide to name the picture (i.e., true naming events) or to commit a verbal deception and use a different name (i.e., false naming events). Different types of analysis explored the influence of semantic, lexical, and phonological information of the target picture on the decision, suppression, and construction processes. The first analysis revealed that participants decided to lie more often when the target picture was less typical or less familiar. The second and third analyses focused on the false naming events. False naming latencies were faster when the name of the target picture was a highly frequent or earlier-acquired name, suggesting an influence of lexical variables on the suppression of the true statement. The third analysis explored the phonological relationship between the word that participants uttered in the false statements and the target picture name; no phonological influences emerged. These findings demonstrate that verbal deception is tied to semantic and lexical variables of the corresponding true statements.
Affiliation(s)
- Eduardo Navarrete: Dipartimento di Psicologia dello Sviluppo e della Socializzazione (DPSS), Università di Padova, Padova, Italy
- Marta De Pedis: Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain
- Anna Lorenzoni: Dipartimento di Psicologia dello Sviluppo e della Socializzazione (DPSS), Università di Padova, Padova, Italy
7. McCall JD, DeMarco AT, Mandal AS, Fama ME, van der Stelt CM, Lacey EH, Laks AB, Snider SF, Friedman RB, Turkeltaub PE. Listening to Yourself and Watching Your Tongue: Distinct Abilities and Brain Regions for Monitoring Semantic and Phonological Speech Errors. J Cogn Neurosci 2023;35:1169-1194. [PMID: 37159232; PMCID: PMC10273223; DOI: 10.1162/jocn_a_02000]
Abstract
Despite the many mistakes we make while speaking, people can effectively communicate because we monitor our speech errors. However, the cognitive abilities and brain structures that support speech error monitoring are unclear. There may be different abilities and brain regions that support monitoring phonological speech errors versus monitoring semantic speech errors. We investigated speech, language, and cognitive control abilities that relate to detecting phonological and semantic speech errors in 41 individuals with aphasia who underwent detailed cognitive testing. Then, we used support vector regression lesion symptom mapping to identify brain regions supporting detection of phonological versus semantic errors in a group of 76 individuals with aphasia. The results revealed that motor speech deficits as well as lesions to the ventral motor cortex were related to reduced detection of phonological errors relative to semantic errors. Detection of semantic errors selectively related to auditory word comprehension deficits. Across all error types, poor cognitive control related to reduced detection. We conclude that monitoring of phonological and semantic errors relies on distinct cognitive abilities and brain regions. Furthermore, we identified cognitive control as a shared cognitive basis for monitoring all types of speech errors. These findings refine and expand our understanding of the neurocognitive basis of speech error monitoring.
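Support vector regression lesion-symptom mapping, as used here, relates multivoxel lesion patterns to a continuous behavioral score. The snippet below is a minimal sketch of that core step on synthetic data, with an illustrative linear kernel so that voxel weights can be read off directly; it is not the authors' actual pipeline, which has its own kernel choice, covariates, and permutation thresholding.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_voxels = 76, 500                            # illustrative sizes
lesions = rng.integers(0, 2, size=(n_patients, n_voxels)).astype(float)
error_detection = rng.normal(size=n_patients)             # behavioral score

# Fit an SVR predicting the behavioral score from lesion patterns; with a
# linear kernel every voxel receives a weight reflecting how lesion status
# there relates to reduced error detection.
svr = SVR(kernel="linear", C=1.0).fit(lesions, error_detection)
voxel_weight_map = svr.coef_.ravel()                       # one weight per voxel

# In practice, maps like this are thresholded by permuting the behavioral
# scores, refitting, and comparing observed weights to the null distribution.
```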
Affiliation(s)
- Joshua D McCall: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
- Andrew T DeMarco: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC
- Ayan S Mandal: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Brain-Gene Development Lab, Psychiatry Department, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Mackenzie E Fama: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Department of Speech, Language, and Hearing Sciences, The George Washington University, Washington, DC
- Candace M van der Stelt: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Research Division, MedStar National Rehabilitation Hospital, Washington, DC
- Elizabeth H Lacey: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Research Division, MedStar National Rehabilitation Hospital, Washington, DC
- Alycia B Laks: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
- Sarah F Snider: Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
- Rhonda B Friedman: Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
- Peter E Turkeltaub: Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC; Research Division, MedStar National Rehabilitation Hospital, Washington, DC; Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
8. Caldwell-Harris CL, MacWhinney B. Age effects in second language acquisition: Expanding the emergentist account. Brain Lang 2023;241:105269. [PMID: 37150139; DOI: 10.1016/j.bandl.2023.105269]
Abstract
In 2005, Science magazine designated the problem of accounting for difficulties in L2 (second language) learning as one of the 125 outstanding challenges facing scientific research. A maturationally-based sensitive period has long been the favorite explanation for why ultimate foreign language attainment declines with age-of-acquisition. However, no genetic or neurobiological mechanisms for limiting language learning have yet been identified. At the same time, we know that cognitive, social, and motivational factors change in complex ways across the human lifespan. Emergentist theory provides a framework for relating these changes to variation in the success of L2 learning. The great variability in patterns of learning, attainment, and loss across ages, social groups, and linguistic levels provides the core motivation for the emergentist approach. Our synthesis incorporates three groups of factors which change systematically with age: environmental supports, cognitive abilities, and motivation for language learning. This extended emergentist account explains why and when second language succeeds for some children and adults and fails for others.
9. Hu J, Small H, Kean H, Takahashi A, Zekelman L, Kleinman D, Ryan E, Nieto-Castañón A, Ferreira V, Fedorenko E. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cereb Cortex 2023;33:4384-4404. [PMID: 36130104; PMCID: PMC10110436; DOI: 10.1093/cercor/bhac350]
Abstract
A fronto-temporal brain network has long been implicated in language comprehension. However, this network's role in language production remains debated. In particular, it remains unclear whether all or only some language regions contribute to production, and which aspects of production these regions support. Across 3 functional magnetic resonance imaging experiments that rely on robust individual-subject analyses, we characterize the language network's response to high-level production demands. We report 3 novel results. First, sentence production, spoken or typed, elicits a strong response throughout the language network. Second, the language network responds to both phrase-structure building and lexical access demands, although the response to phrase-structure building is stronger and more spatially extensive, present in every language region. Finally, contra some proposals, we find no evidence of brain regions-within or outside the language network-that selectively support phrase-structure building in production relative to comprehension. Instead, all language regions respond more strongly during production than comprehension, suggesting that production incurs a greater cost for the language network. Together, these results align with the idea that language comprehension and production draw on the same knowledge representations, which are stored in a distributed manner within the language-selective network and are used to both interpret and generate linguistic utterances.
Affiliation(s)
- Jennifer Hu: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Hannah Small: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, United States
- Hope Kean: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Atsushi Takahashi: McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Leo Zekelman: Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Elizabeth Ryan: St. George’s Medical School, St. George’s University, Grenada, West Indies
- Alfonso Nieto-Castañón: McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215, United States
- Victor Ferreira: Department of Psychology, UCSD, La Jolla, CA 92093, United States
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States; Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
10. Shekari E, Nozari N. A narrative review of the anatomy and function of the white matter tracts in language production and comprehension. Front Hum Neurosci 2023;17:1139292. [PMID: 37051488; PMCID: PMC10083342; DOI: 10.3389/fnhum.2023.1139292]
Abstract
Much is known about the role of cortical areas in language processing. The shift towards network approaches in recent years has highlighted the importance of uncovering the role of white matter in connecting these areas. However, despite a large body of research, many of these tracts' functions are not well-understood. We present a comprehensive review of the empirical evidence on the role of eight major tracts that are hypothesized to be involved in language processing (inferior longitudinal fasciculus, inferior fronto-occipital fasciculus, uncinate fasciculus, extreme capsule, middle longitudinal fasciculus, superior longitudinal fasciculus, arcuate fasciculus, and frontal aslant tract). For each tract, we hypothesize its role based on the function of the cortical regions it connects. We then evaluate these hypotheses with data from three sources: studies in neurotypical individuals, neuropsychological data, and intraoperative stimulation studies. Finally, we summarize the conclusions supported by the data and highlight the areas needing further investigation.
Affiliation(s)
- Ehsan Shekari: Department of Neuroscience, Iran University of Medical Sciences, Tehran, Iran
- Nazbanou Nozari: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States; Center for the Neural Basis of Cognition (CNBC), Pittsburgh, PA, United States
11. Teghipco A, Okada K, Murphy E, Hickok G. Predictive Coding and Internal Error Correction in Speech Production. Neurobiol Lang (Camb) 2023;4:81-119. [PMID: 37229143; PMCID: PMC10205072; DOI: 10.1162/nol_a_00088]
Abstract
Speech production involves the careful orchestration of sophisticated systems, yet overt speech errors rarely occur under naturalistic conditions. The present functional magnetic resonance imaging study sought neural evidence for internal error detection and correction by leveraging a tongue twister paradigm that induces the potential for speech errors while excluding any overt errors from analysis. Previous work using the same paradigm in the context of silently articulated and imagined speech production tasks has demonstrated forward predictive signals in auditory cortex during speech and presented suggestive evidence of internal error correction in left posterior middle temporal gyrus (pMTG) on the basis that this area tended toward showing a stronger response when potential speech errors are biased toward nonwords compared to words (Okada et al., 2018). The present study built on this prior work by attempting to replicate the forward prediction and lexicality effects in nearly twice as many participants but introduced novel stimuli designed to further tax internal error correction and detection mechanisms by biasing speech errors toward taboo words. The forward prediction effect was replicated. While no evidence was found for a significant difference in brain response as a function of lexical status of the potential speech error, biasing potential errors toward taboo words elicited significantly greater response in left pMTG than biasing errors toward (neutral) words. Other brain areas showed preferential response for taboo words as well but responded below baseline and were less likely to reflect language processing as indicated by a decoding analysis, implicating left pMTG in internal error correction.
Affiliation(s)
- Alex Teghipco: Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Kayoko Okada: Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Emma Murphy: Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, CA, USA
12. Nunn K, Vallila-Rohter S, Middleton EL. Errorless, Errorful, and Retrieval Practice for Naming Treatment in Aphasia: A Scoping Review of Learning Mechanisms and Treatment Ingredients. J Speech Lang Hear Res 2023;66:668-687. [PMID: 36729701; PMCID: PMC10023178; DOI: 10.1044/2022_jslhr-22-00251]
Abstract
PURPOSE Increasingly, mechanisms of learning are being considered during aphasia rehabilitation. Well-characterized learning mechanisms can inform "how" interventions should be administered to maximize the acquisition and retention of treatment gains. This systematic scoping review mapped hypothesized mechanisms of action (MoAs) and treatment ingredients in three learning-based approaches targeting naming in aphasia: errorless learning (ELess), errorful learning (EFul), and retrieval practice (RP). The rehabilitation treatment specification system was leveraged to describe available literature and identify knowledge gaps within a unified framework. METHOD PubMed and CINHAL were searched for studies that compared ELess, EFul, and/or RP for naming in aphasia. Independent reviewers extracted data on proposed MoAs, treatment ingredients, and outcomes. RESULTS Twelve studies compared ELess and EFul, six studies compared ELess and RP, and one study compared RP and EFul. Hebbian learning, gated Hebbian learning, effortful retrieval, and models of incremental learning via lexical access were proposed as MoAs. To maximize treatment outcomes within theorized MoAs, researchers manipulated study ingredients including cues, scheduling, and feedback. Outcomes in comparative effectiveness studies were examined to identify ingredients that may influence learning. Individual-level variables, such as cognitive and linguistic abilities, may affect treatment response; however, findings were inconsistent across studies. CONCLUSIONS Significant knowledge gaps were identified and include (a) which MoAs operate during ELess, EFul, and RP; (b) which ingredients are active and engage specific MoAs; and (c) how individual-level variables may drive treatment administration. Theory-driven research can support or refute MoAs and active ingredients enabling clinicians to modify treatments within theoretical frameworks.
Affiliation(s)
- Kristen Nunn: Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Sofia Vallila-Rohter: Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA
- Erica L. Middleton: Research Department, Moss Rehabilitation Research Institute, Elkins Park, PA
13. Weiss AR, Korzeniewska A, Chrabaszcz A, Bush A, Fiez JA, Crone NE, Richardson RM. Lexicality-Modulated Influence of Auditory Cortex on Subthalamic Nucleus During Motor Planning for Speech. Neurobiol Lang (Camb) 2023;4:53-80. [PMID: 37229140; PMCID: PMC10205077; DOI: 10.1162/nol_a_00086]
Abstract
Speech requires successful information transfer within cortical-basal ganglia loop circuits to produce the desired acoustic output. For this reason, up to 90% of Parkinson's disease patients experience impairments of speech articulation. Deep brain stimulation (DBS) is highly effective in controlling the symptoms of Parkinson's disease, sometimes alongside speech improvement, but subthalamic nucleus (STN) DBS can also lead to decreases in semantic and phonological fluency. This paradox demands better understanding of the interactions between the cortical speech network and the STN, which can be investigated with intracranial EEG recordings collected during DBS implantation surgery. We analyzed the propagation of high-gamma activity between STN, superior temporal gyrus (STG), and ventral sensorimotor cortices during reading aloud via event-related causality, a method that estimates strengths and directionalities of neural activity propagation. We employed a newly developed bivariate smoothing model based on a two-dimensional moving average, which is optimal for reducing random noise while retaining a sharp step response, to ensure precise embedding of statistical significance in the time-frequency space. Sustained and reciprocal neural interactions between STN and ventral sensorimotor cortex were observed. Moreover, high-gamma activity propagated from the STG to the STN prior to speech onset. The strength of this influence was affected by the lexical status of the utterance, with increased activity propagation during word versus pseudoword reading. These unique data suggest a potential role for the STN in the feedforward control of speech.
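The two-dimensional moving-average smoothing described here amounts to replacing each time-frequency cell with the mean of a small surrounding window. A minimal sketch on synthetic data follows; the window size and the uniform_filter call are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Illustrative time-frequency matrix of propagation strengths
# (rows = frequency bins, columns = time samples).
tf_map = np.random.default_rng(1).normal(size=(40, 200))

# Two-dimensional moving average: each cell becomes the mean of a small
# surrounding window, reducing random noise while a sharp step in the data
# is blurred only over the width of the window.
smoothed = uniform_filter(tf_map, size=(5, 9), mode="nearest")
```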
Affiliation(s)
- Alexander R. Weiss: JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Anna Korzeniewska: JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Anna Chrabaszcz: Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Alan Bush: Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Julie A. Fiez: Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA; University of Pittsburgh Brain Institute, Pittsburgh, PA, USA
- Nathan E. Crone: JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Robert M. Richardson: Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
14. Volfart A, McMahon KL, Howard D, de Zubicaray GI. Neural Correlates of Naturally Occurring Speech Errors during Picture Naming in Healthy Participants. J Cogn Neurosci 2022;35:111-127. [PMID: 36306259; DOI: 10.1162/jocn_a_01927]
Abstract
Most of our knowledge about the neuroanatomy of speech errors comes from lesion-symptom mapping studies in people with aphasia and laboratory paradigms designed to elicit primarily phonological errors in healthy adults, with comparatively little evidence from naturally occurring speech errors. In this study, we analyzed perfusion fMRI data from 24 healthy participants during a picture naming task, classifying their responses into correct and different speech error types (e.g., semantic, phonological, omission errors). Total speech errors engaged a wide set of left-lateralized frontal, parietal, and temporal regions that were almost identical to those involved during the production of correct responses. We observed significant perfusion signal decreases in the left posterior middle temporal gyrus and inferior parietal lobule (angular gyrus) for semantic errors compared to correct trials matched on various psycholinguistic variables. In addition, the left dorsal caudate nucleus showed a significant perfusion signal decrease for omission (i.e., anomic) errors compared with matched correct trials. Surprisingly, we did not observe any significant perfusion signal changes in brain regions proposed to be associated with monitoring mechanisms during speech production (e.g., ACC, superior temporal gyrus). Overall, our findings provide evidence for distinct neural correlates of semantic and omission error types, with anomic speech errors likely resulting from failures to initiate articulatory-motor processes rather than semantic knowledge impairments as often reported for people with aphasia.
Affiliation(s)
- Katie L McMahon: Queensland University of Technology; Royal Brisbane & Women's Hospital
15. Schnur TT, Lei CM. Assessing naming errors using an automated machine learning approach. Neuropsychology 2022;36:709-718. [PMID: 36107705; PMCID: PMC9970144; DOI: 10.1037/neu0000860]
Abstract
OBJECTIVE After left hemisphere stroke, 20%-50% of people experience language deficits, including difficulties in naming. Naming errors that are semantically related to the intended target (e.g., producing "violin" for the picture HARP) indicate a potential impairment in accessing knowledge of word forms and their meanings. Understanding the cause of naming impairments is crucial for better modeling of language production as well as for tailoring individualized rehabilitation. However, evaluation of naming errors typically relies on subjective and laborious dichotomous classification. As a result, these evaluations do not capture the degree of semantic similarity and are susceptible to lower interrater reliability because of subjectivity. METHOD We investigated whether a computational linguistic measure using word2vec (Mikolov, Chen, et al., 2013) addresses these limitations by evaluating errors during object naming in a group of patients in the acute stage of a left-hemisphere stroke (N = 105). RESULTS Pearson correlations demonstrated excellent convergent validity between word2vec's semantic-relatedness estimates of naming errors and independent tests of access to lexical-semantic knowledge (p < .0001). Further, multiple regression analysis showed that word2vec's semantic-relatedness estimates were significantly better than human error classification at predicting performance on tests of lexical-semantic knowledge. CONCLUSIONS Useful to both theorists and clinicians, our word2vec-based method provides an automated, continuous, and objective psychometric measure of access to lexical-semantic knowledge during naming.
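Operationally, a word2vec-based score of this kind reduces to the cosine similarity between the embedding of the produced error and the embedding of the target name. A minimal sketch using gensim's pretrained vectors is shown below; the specific model name and example words are illustrative, not necessarily those used in the study.

```python
import gensim.downloader as api

# Downloads a large pretrained model on first use; any word2vec-style
# KeyedVectors model would work the same way for this illustration.
vectors = api.load("word2vec-google-news-300")

def semantic_relatedness(target: str, response: str) -> float:
    """Continuous score of how semantically close a naming error is to the target."""
    return float(vectors.similarity(target, response))

print(semantic_relatedness("harp", "violin"))   # semantically related error
print(semantic_relatedness("harp", "carrot"))   # unrelated error scores lower
```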
Affiliation(s)
- Tatiana T. Schnur: Departments of Neurosurgery & Neuroscience, Baylor College of Medicine
- Chia-Ming Lei: Department of Communication Sciences & Disorders, Radford University
16. Kompa NA, Mueller JL. Inner speech as a cognitive tool—or what is the point of talking to oneself? Philos Psychol 2022. [DOI: 10.1080/09515089.2022.2112164]
Affiliation(s)
- Nikola A. Kompa: Institute of Philosophy, University of Osnabrück, Osnabrück, Germany
17. Middleton EL, Schwartz MF, Dell GS, Brecher A. Learning from errors: Exploration of the monitoring learning effect. Cognition 2022;224:105057. [PMID: 35218984; PMCID: PMC9086111; DOI: 10.1016/j.cognition.2022.105057]
Abstract
The present study examined spontaneous detection and repair of naming errors in people with aphasia to advance a theoretical understanding of how monitoring impacts learning in lexical access. Prior work in aphasia has found that spontaneous repair, but not mere detection without repair, of semantic naming errors leads to improved naming on those same items in the future when other factors are accounted for. The present study sought to replicate this finding in a new, larger sample of participants and to examine the critical role of self-generated repair in this monitoring learning effect. Twenty-four participants with chronic aphasia with naming impairment provided naming responses to a 660-item corpus of common, everyday objects at two timepoints. At the first timepoint, a randomly selected subset of trials ended in experimenter-provided corrective feedback. Each naming trial was coded for accuracy, error type, and for any monitoring behavior that occurred, specifically detection with repair (i.e., correction), detection without repair, and no detection. Focusing on semantic errors, the original monitoring learning effect was replicated, with enhanced accuracy at a future timepoint when the first trial with that item involved detection with repair, compared to error trials that were not detected. This enhanced accuracy resulted from learning that arose from the first trial rather than the presence of repair simply signifying easier items. A second analysis compared learning from trials of self-corrected errors to that of trials ending in feedback that were detected but not self-corrected and found enhanced learning after self-generated repair. Implications for theories of lexical access and monitoring are discussed.
Affiliation(s)
- Erica L Middleton: Moss Rehabilitation Research Institute, 50 Township Line Rd, Elkins Park, PA 19027, USA
- Myrna F Schwartz: Moss Rehabilitation Research Institute, 50 Township Line Rd, Elkins Park, PA 19027, USA
- Gary S Dell: Department of Psychology, University of Illinois, Urbana-Champaign, 603 E. Daniel St, Champaign, IL 61820, USA
- Adelyn Brecher: Moss Rehabilitation Research Institute, 50 Township Line Rd, Elkins Park, PA 19027, USA
18. Brothers T, Zeitlin M, Perrachione AC, Choi C, Kuperberg G. Domain-general conflict monitoring predicts neural and behavioral indices of linguistic error processing during reading comprehension. J Exp Psychol Gen 2022;151:1502-1519. [PMID: 34843366; PMCID: PMC9888606; DOI: 10.1037/xge0001130]
Abstract
The ability to detect and respond to linguistic errors is critical for successful reading comprehension, but these skills can vary considerably across readers. In the current study, healthy adults (ages 18-35) read short discourse scenarios for comprehension while monitoring for the presence of semantic anomalies. Using a factor analytic approach, we examined whether performance in nonlinguistic conflict monitoring tasks (Stroop, AX-CPT) would predict individual differences in neural and behavioral measures of linguistic error processing. Consistent with this hypothesis, domain-general conflict monitoring predicted both readers' end-of-trial acceptability judgments and the amplitude of a late neural response (the P600) evoked by linguistic anomalies. The influence on the P600 was nonlinear, suggesting that online neural responses to linguistic errors are influenced by both the effectiveness and the efficiency of domain-general conflict monitoring. These relationships were also highly specific and remained after controlling for variability in working memory capacity and verbal knowledge. Finally, we found that domain-general conflict monitoring also predicted individual variability in measures of reading comprehension, and that this relationship was partially mediated by behavioral measures of linguistic error detection. These findings inform our understanding of the role of domain-general executive functions in reading comprehension, with potential implications for the diagnosis and treatment of language impairments.
Affiliation(s)
- Trevor Brothers: Tufts University; Martinos Center for Biomedical Imaging, Massachusetts General Hospital
- Gina Kuperberg: Tufts University; Martinos Center for Biomedical Imaging, Massachusetts General Hospital
19. Pickering MJ, McLean JF, Gambi C. Interference in the shared-Stroop task: a comparison of self- and other-monitoring. R Soc Open Sci 2022;9:220107. [PMID: 35601453; PMCID: PMC9043706; DOI: 10.1098/rsos.220107]
Abstract
Co-actors represent and integrate each other's actions, even when they need not monitor one another. However, monitoring is important for successful interactions, particularly those involving language, and monitoring others' utterances probably relies on similar mechanisms as monitoring one's own. We investigated the effect of monitoring on the integration of self- and other-generated utterances in the shared-Stroop task. In a solo version of the Stroop task (with a single participant responding to all stimuli; Experiment 1), participants named the ink colour of mismatching colour words (incongruent stimuli) more slowly than matching colour words (congruent). In the shared-Stroop task, one participant named the ink colour of words in one colour (e.g. red), while ignoring stimuli in the other colour (e.g. green); the other participant either named the other ink colour or did not respond. Crucially, participants either provided feedback about the correctness of their partner's response (Experiment 3) or did not (Experiment 2). Interference was greater when both participants responded than when they did not, but only when their partners provided feedback. We argue that feedback increased interference because monitoring one's partner enhanced representations of the partner's target utterance, which in turn interfered with self-monitoring of the participant's own utterance.
Affiliation(s)
- Martin J. Pickering: Department of Psychology, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, Scotland, UK
- Janet F. McLean: School of Applied Sciences, Abertay University, Dundee DD1 1HG, Scotland, UK
- Chiara Gambi: School of Psychology, Cardiff University, 70 Park Place, Cardiff CF10 3AT, Wales, UK
20. Kapatsinski V. Morphology in a Parallel, Distributed, Interactive Architecture of Language Production. Front Artif Intell 2022;5:803259. [PMID: 35310958; PMCID: PMC8927966; DOI: 10.3389/frai.2022.803259]
Abstract
How do speakers produce novel words? This programmatic paper synthesizes research in linguistics and neuroscience to argue for a parallel distributed architecture of the language system, in which distributed semantic representations activate competing form chunks in parallel. This process accounts for both the synchronic phenomenon of paradigm uniformity and the diachronic process of paradigm leveling; i.e., the shaping or reshaping of relatively infrequent forms by semantically-related forms of higher frequency. However, it also raises the question of how leveling is avoided. A negative feedback cycle is argued to be responsible. The negative feedback cycle suppresses activated form chunks with unintended semantics or connotations and allows the speaker to decide when to begin speaking. The negative feedback cycle explains away much of the evidence for paradigmatic mappings, allowing more of the grammar to be described with only direct form-meaning mappings/constructions. However, there remains an important residue of cases for which paradigmatic mappings are necessary. I show that these cases can be accounted for by spreading activation down paradigmatic associations as the source of the activation is being inhibited by negative feedback. The negative feedback cycle provides a mechanistic explanation for several phenomena in language change that have so far eluded usage-based accounts. In particular, it provides a mechanism for degrammaticalization and affix liberation (e.g., the detachment of -holic from the context(s) in which it occurs), explaining how chunks can gain productivity despite occurring in a single fixed context. It also provides a novel perspective on paradigm gaps. Directions for future work are outlined.
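The proposed dynamic, in which semantic features activate competing form chunks in parallel and a negative feedback cycle then suppresses chunks with unintended semantics, can be pictured with a toy activation loop. Everything in the sketch (the chunks, features, weights, and suppression factor) is invented for illustration; it is not the paper's formal model.

```python
# Toy sketch: semantic features spread activation to form chunks in parallel;
# a negative feedback cycle then suppresses chunks whose semantics do not
# match the intended message. All chunks, features, and weights are made up.
form_chunks = {
    "workaholic": {"work": 0.9, "excess": 0.7, "person": 0.3},
    "chocoholic": {"chocolate": 0.9, "excess": 0.7, "person": 0.3},
    "worker":     {"work": 0.9, "person": 0.5},
}
intended_message = {"work", "excess", "person"}

# Parallel activation: each chunk is activated by every feature it shares
# with the intended message.
activation = {
    form: sum(w for feat, w in feats.items() if feat in intended_message)
    for form, feats in form_chunks.items()
}

# Negative feedback cycle: chunks carrying features outside the intended
# message are repeatedly suppressed before articulation begins.
for _ in range(3):
    for form, feats in form_chunks.items():
        unintended = set(feats) - intended_message
        activation[form] *= 0.5 ** len(unintended)

winner = max(activation, key=activation.get)
print(winner, activation)
```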
21. Akhavan N, Sen C, Baker C, Abbott N, Gravier M, Love T. Effect of Lexical-Semantic Cues during Real-Time Sentence Processing in Aphasia. Brain Sci 2022;12:312. [PMID: 35326268; PMCID: PMC8946627; DOI: 10.3390/brainsci12030312]
Abstract
Using a visual world eye-tracking paradigm, we investigated the real-time auditory sentence processing of neurologically unimpaired listeners and individuals with aphasia. We examined whether lexical-semantic cues provided as adjectives of a target noun modulate the encoding and retrieval dynamics of a noun phrase during the processing of complex, non-canonical sentences. We hypothesized that the real-time processing pattern of sentences containing a semantically biased lexical cue (e.g., the venomous snake) would be different than sentences containing unbiased adjectives (e.g., the voracious snake). More specifically, we predicted that the presence of a biased lexical cue would facilitate (1) lexical encoding (i.e., boosted lexical access) of the target noun, snake, and (2) on-time syntactic retrieval or dependency linking (i.e., increasing the probability of on-time lexical retrieval at post-verb gap site) for both groups. For unimpaired listeners, results revealed a difference in the time course of gaze trajectories to the target noun (snake) during lexical encoding and syntactic retrieval in the biased compared to the unbiased condition. In contrast, for the aphasia group, the presence of biased adjectives did not affect the time course of processing the target noun. Yet, at the post-verb gap site, the presence of a semantically biased adjective influenced syntactic re-activation. Our results extend the cue-based parsing model by offering new and valuable insights into the processes underlying sentence comprehension of individuals with aphasia.
Affiliation(s)
- Niloofar Akhavan, Christina Sen, Carolyn Baker, Noelle Abbott, and Tracy Love: School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA 92182, USA; Department of Cognitive Science, University of California San Diego, San Diego, CA 92122, USA; Joint Doctoral Program in Language and Communicative Disorders, San Diego State University/University of California San Diego, San Diego, CA 92182, USA
- Michelle Gravier: Department of Speech, Language and Hearing Sciences, California State University East Bay, Hayward, CA 94542, USA
22. Correction Without Consciousness in Complex Tasks: Evidence from Typing. J Cogn 2022;5:11. [PMID: 35083414; PMCID: PMC8740635; DOI: 10.5334/joc.202]
Abstract
It has been demonstrated that, with practice, complex tasks can become independent of conscious control, but even in those cases, repairing errors is thought to remain dependent on conscious control. This paper reports two studies probing conscious awareness of repairs in nearly 15,000 typing errors collected from 145 participants in a single-word typing-to-dictation task. We provide evidence for subconscious repairs by ruling out alternative accounts, and report two sets of analyses showing (a) that such repairs are not confined to a specific stage of processing and (b) that they are sensitive to the final outcome of the repair. A third set of analyses provides a detailed comparison of the timeline of trials with conscious and subconscious repairs, revealing that the difference is confined to the repair process itself. We propose an account of repair processing that accommodates these empirical findings.
23. McCall JD, Vivian Dickens J, Mandal AS, DeMarco AT, Fama ME, Lacey EH, Kelkar A, Medaglia JD, Turkeltaub PE. Structural disconnection of the posterior medial frontal cortex reduces speech error monitoring. Neuroimage Clin 2022;33:102934. [PMID: 34995870; PMCID: PMC8739872; DOI: 10.1016/j.nicl.2021.102934]
Abstract
Optimal performance in any task relies on the ability to detect and correct errors. The anterior cingulate cortex and the broader posterior medial frontal cortex (pMFC) are active during error processing. However, it is unclear whether damage to the pMFC impairs error monitoring. We hypothesized that successful error monitoring critically relies on connections between the pMFC and broader cortical networks involved in executive functions and the task being monitored. We tested this hypothesis in the context of speech error monitoring in people with post-stroke aphasia. Diffusion-weighted images were collected in 51 adults with chronic left-hemisphere stroke and 37 age-matched control participants. Whole-brain connectomes were derived using constrained spherical deconvolution and anatomically-constrained probabilistic tractography. Support vector regressions identified white matter connections in which lost integrity in stroke survivors related to reduced error detection during confrontation naming. Lesioned connections to the bilateral pMFC were related to reduced error monitoring, including many connections to regions associated with speech production and executive function. We conclude that error monitoring in speech production is supported by structural connectivity between the pMFC and regions involved in speech production, comprehension, and executive function. Interactions between the pMFC and other task-relevant processors may similarly be critical for error monitoring in other task contexts.
Collapse
Affiliation(s)
- Joshua D McCall
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA
| | - J Vivian Dickens
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA
| | - Ayan S Mandal
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Psychiatry Department, University of Cambridge, Cambridge CB2 1TN, UK
| | - Andrew T DeMarco
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC 20007, USA
| | - Mackenzie E Fama
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Department of Speech, Language, and Hearing Sciences, The George Washington University, DC 20052, USA
| | - Elizabeth H Lacey
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
| | - Apoorva Kelkar
- Psychology Department, Drexel University, Philadelphia, PA 19104, USA
| | - John D Medaglia
- Psychology Department, Drexel University, Philadelphia, PA 19104, USA; Neurology Department, University of Pennsylvania, Philadelphia, PA 19104, USA
| | - Peter E Turkeltaub
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC 20007, USA.
| |
Collapse
|
24
|
Avoiding gender ambiguous pronouns in French. Cognition 2021; 218:104909. [PMID: 34649089 DOI: 10.1016/j.cognition.2021.104909] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 08/09/2021] [Accepted: 09/13/2021] [Indexed: 11/22/2022]
Abstract
Across many languages, pronouns are the most frequently produced referring expressions. We examined whether and how French speakers avoid the referential ambiguity that arises when the gender of a pronoun is compatible with more than one entity in the context. Experiment 1 showed that speakers use fewer pronouns when human referents have the same gender than when they have different genders, whereas grammatical gender congruence between inanimate referents did not result in fewer pronouns. Experiment 2 showed that semantic similarity between non-human referents can increase the likelihood that speakers avoid grammatically gender-ambiguous pronouns. Experiment 3 pitted grammatical gender ambiguity avoidance against the referents' competition in the non-linguistic context, showing that when speakers can base their pronoun choice on non-linguistic competition, they ignore the pronoun's grammatical gender ambiguity even when the referents are semantically related. The results thus indicate that speakers preferentially produce referring expressions based on non-linguistic information; they are more likely to be affected by the referents' non-linguistic similarity than by the linguistic ambiguity of a pronoun.
Collapse
|
25
|
van der Stelt CM, Fama ME, Mccall JD, Snider SF, Turkeltaub PE. Intellectual awareness of naming abilities in people with chronic post-stroke aphasia. Neuropsychologia 2021; 160:107961. [PMID: 34274379 PMCID: PMC8405585 DOI: 10.1016/j.neuropsychologia.2021.107961] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Revised: 07/02/2021] [Accepted: 07/13/2021] [Indexed: 11/21/2022]
Abstract
Anosognosia, or lack of self-awareness, is often present following neurological injury and can result in poor functional outcomes. The specific phenomenon of intellectual awareness, the knowledge that a function is impaired in oneself, has not been widely studied in post-stroke aphasia. We aim to identify behavioral and neural correlates of intellectual awareness by comparing stroke survivors' self-reports of anomia to objective naming performance and examining lesion sites. Fifty-three participants with chronic aphasia without severe comprehension deficits rated their naming ability and completed a battery of behavioral tests. We calculated the reliability and accuracy of participant self-ratings, then examined the relationship of poor intellectual awareness to speech, language, and cognitive measures. We used support vector regression lesion-symptom mapping (SVR-LSM) to determine lesion locations associated with impaired and preserved intellectual awareness. Reliability and accuracy of self-ratings varied across the participants. Poor intellectual awareness was associated with reduced performance on tasks that rely on semantics. Our SVR-LSM results demonstrated that anterior inferior frontal lesions were associated with poor awareness, while mid-superior temporal lesions were associated with preserved awareness. An anterior-posterior gradient was evident in the unthresholded lesion-symptom maps. While many people with chronic aphasia and relatively intact comprehension can accurately and reliably report the severity of their anomia, others overestimate, underestimate, or inconsistently estimate their naming abilities. Clinicians should consider this when administering self-rating scales, particularly when semantic deficits or anterior inferior frontal lesions are present. Administering self-ratings on multiple days may be useful to check the reliability of patient perceptions.
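The reliability and accuracy computations mentioned above reduce to correlating repeated self-ratings with each other and with objective naming scores; the sketch below uses invented scales and arrays, and the rating instrument and scoring used in the study may differ.

```python
# Sketch: reliability (agreement across two rating sessions) and accuracy
# (agreement between self-rating and objective naming) of anomia self-ratings.
# Hypothetical data and scales, for illustration only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
self_rating_day1 = rng.integers(1, 8, size=53)   # assumed 1-7 self-rated naming ability
self_rating_day2 = rng.integers(1, 8, size=53)   # same scale on a second day
objective_naming = rng.random(53) * 100          # percent correct on a naming battery

reliability, _ = spearmanr(self_rating_day1, self_rating_day2)
accuracy, _ = spearmanr(self_rating_day1, objective_naming)
print(f"test-retest reliability rho = {reliability:.2f}")
print(f"self-rating vs. objective naming rho = {accuracy:.2f}")
```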
Collapse
Affiliation(s)
- Candace M van der Stelt
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, USA; Department of Neurology, Georgetown University Medical Center, USA; Research Division, MedStar National Rehabilitation Hospital, USA
| | - Mackenzie E Fama
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, USA; Department of Neurology, Georgetown University Medical Center, USA; Department of Speech, Language and Hearing Sciences, George Washington University, USA
| | - Joshua D Mccall
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, USA; Department of Neurology, Georgetown University Medical Center, USA
| | - Sarah F Snider
- Department of Neurology, Georgetown University Medical Center, USA
| | - Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, USA; Department of Neurology, Georgetown University Medical Center, USA; Research Division, MedStar National Rehabilitation Hospital, USA.
| |
Collapse
|
26
|
Ghaleh M, Lacey EH, Fama ME, Anbari Z, DeMarco AT, Turkeltaub PE. Dissociable Mechanisms of Verbal Working Memory Revealed through Multivariate Lesion Mapping. Cereb Cortex 2021; 30:2542-2554. [PMID: 31701121 DOI: 10.1093/cercor/bhz259] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Two maintenance mechanisms with separate neural systems have been suggested for verbal working memory: articulatory rehearsal and non-articulatory maintenance. Although lesion data would be key to understanding the essential neural substrates of these systems, there is little evidence from lesion studies that the two proposed mechanisms crucially rely on different neuroanatomical substrates. We examined 39 healthy adults and 71 individuals with chronic left-hemisphere stroke to determine if verbal working memory tasks with varying demands would rely on dissociable brain structures. Multivariate lesion-symptom mapping was used to identify the brain regions involved in each task, controlling for spatial working memory scores. Maintenance of verbal information relied on distinct brain regions depending on task demands: sensorimotor cortex under higher demands and superior temporal gyrus (STG) under lower demands. Inferior parietal cortex and posterior STG were involved under both low and high demands. These results suggest that maintenance of auditory information preferentially relies on auditory-phonological storage in the STG via a non-articulatory maintenance mechanism when demands are low. Under higher demands, sensorimotor regions are crucial for the articulatory rehearsal process, which reduces the reliance on the STG for maintenance. Lesions to either of these regions impair maintenance of verbal information preferentially under the appropriate task conditions.
Collapse
Affiliation(s)
- Maryam Ghaleh
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
| | - Elizabeth H Lacey
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA.,Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
| | - Mackenzie E Fama
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA.,Department of Speech-Language Pathology and Audiology, Towson University, Towson, MD 21252, USA
| | - Zainab Anbari
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
| | - Andrew T DeMarco
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
| | - Peter E Turkeltaub
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA.,Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
| |
Collapse
|
27
|
Runnqvist E, Chanoine V, Strijkers K, Pattamadilok C, Bonnard M, Nazarian B, Sein J, Anton JL, Dorokhova L, Belin P, Alario FX. Cerebellar and Cortical Correlates of Internal and External Speech Error Monitoring. Cereb Cortex Commun 2021; 2:tgab038. [PMID: 34296182 PMCID: PMC8237718 DOI: 10.1093/texcom/tgab038] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 05/26/2021] [Accepted: 05/26/2021] [Indexed: 11/12/2022] Open
Abstract
An event-related functional magnetic resonance imaging study examined how speakers inspect their own speech for errors. Concretely, we sought to assess 1) the role of the temporal cortex in monitoring speech errors, linked with comprehension-based monitoring; 2) the involvement of the cerebellum in internal and external monitoring, linked with forward modeling; and 3) the role of the medial frontal cortex for internal monitoring, linked with conflict-based monitoring. In a word production task priming speech errors, we observed enhanced involvement of the right posterior cerebellum for correct trials on which participants were more likely to make a word error than a nonword error (contrast of internal monitoring). Furthermore, comparing errors to correct utterances (contrast of external monitoring), we observed increased activation of the same cerebellar region, of the superior medial cerebellum, and of regions in temporal and medial frontal cortex. The involvement of the cerebellum in both internal and external monitoring indicates the use of forward modeling across the planning and articulation of speech. Dissociations across internal and external monitoring in temporal and medial frontal cortex indicate that monitoring of overt errors is more reliant on vocal feedback control.
Collapse
Affiliation(s)
- Elin Runnqvist
- Aix-Marseille Université, CNRS, LPL, Aix-en-Provence 13100, France
| | - Valérie Chanoine
- Aix-Marseille Université, CNRS, LPL, Aix-en-Provence 13100, France
- Institute of Language, Communication and the Brain, Aix-en-Provence 13100, France
| | | | | | | | - Bruno Nazarian
- Centre IRM, Marseille 13005, France
- Aix-Marseille Université, CNRS, INT 13005, Marseille, France
| | - Julien Sein
- Centre IRM, Marseille 13005, France
- Aix-Marseille Université, CNRS, INT 13005, Marseille, France
| | - Jean-Luc Anton
- Centre IRM, Marseille 13005, France
- Aix-Marseille Université, CNRS, INT 13005, Marseille, France
| | - Lydia Dorokhova
- Aix-Marseille Université, CNRS, LPL, Aix-en-Provence 13100, France
| | - Pascal Belin
- Aix-Marseille Université, CNRS, INT 13005, Marseille, France
| | | |
Collapse
|
28
|
Ries SK, Pinet S, Nozari NB, Knight RT. Characterizing multi-word speech production using event-related potentials. Psychophysiology 2021; 58:e13788. [PMID: 33569829 PMCID: PMC8193832 DOI: 10.1111/psyp.13788] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2020] [Revised: 11/25/2020] [Accepted: 01/19/2021] [Indexed: 11/30/2022]
Abstract
Event-related potentials (ERPs) derived from electroencephalography (EEG) have proven useful for understanding linguistic processes during language perception and production. Words are commonly produced in sequences, yet most ERP studies have used single-word experimental designs. Single-word designs reduce potential ERP overlap in word sequence production. However, word sequence production engages brain mechanisms in different ways than single-word production. In particular, speech monitoring and planning mechanisms are more engaged than for single words, since several words must be produced in a short period of time. This study evaluates the feasibility of recording ERP components in the context of word sequence production, and whether separate components can be isolated for each word. Scalp EEG data were acquired while participants recited word sequences from memory at a regular pace, using a tongue-twister paradigm. The results revealed a fronto-central error-related negativity, previously associated with speech monitoring, which could be distinguished for each word. Its peak amplitude was sensitive to Cycle and Phonological Similarity. However, an effect of sequential production was also observable on baseline measures, indicating baseline shifts throughout the word sequence due to concurrent sustained medial-frontal EEG activity. We also report a late left anterior negativity (LLAN), associated with verbal response planning and execution, beginning around 100 ms before the first word in each cycle and sustained throughout the rest of the cycle. This work underlines the importance of considering the contribution of transient and sustained EEG activity to ERPs, and provides evidence that ERPs can be used to study sequential word production.
Collapse
Affiliation(s)
- Stephanie K Ries
- School of Speech, Language, and Hearing Sciences, Center for Clinical and Cognitive Neuroscience, San Diego State University, SDSU-UCSD Joint Doctoral Program in Language and Communicative Disorders, San Diego, CA, USA
| | - Svetlana Pinet
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain
| | - N Bonnie Nozari
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Robert T Knight
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
| |
Collapse
|
29
|
Gollan TH, Smirnov DS, Salmon DP, Galasko D. Failure to stop autocorrect errors in reading aloud increases in aging especially with a positive biomarker for Alzheimer's disease. Psychol Aging 2020; 35:1016-1025. [PMID: 32584071 PMCID: PMC8357184 DOI: 10.1037/pag0000550] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
The present study examined the effects of aging and CSF biomarkers of Alzheimer's disease (AD) on the ability to control production of unexpected words in connected speech elicited by reading aloud. Fifty-two cognitively healthy participants aged 66-86 read aloud 6 paragraphs with 10 malapropisms including 5 on content words (e.g., "window cartons" that elicited autocorrect errors to "window curtains") and 5 on function words (e.g., "thus concept" that elicited autocorrections to "this concept") and completed a battery of neuropsychological tests including a standardized Stroop task. Reading aloud elicited more autocorrect errors on function than content words, but these were equally correlated with age and Aβ1-42 levels. The ability to stop autocorrect errors declined in aging and with lower (more AD-like) levels of Aβ1-42, and multiplicatively so, such that autocorrect errors were highest in the oldest-old with the lowest Aβ1-42 levels. Critically, aging effects were significant even when controlling statistically for Aβ1-42. Finally, both autocorrect and Stroop errors were correlated with Aβ1-42, but only autocorrect errors captured unique variance in predicting Aβ1-42 levels. Reading aloud requires simultaneous planning and monitoring of upcoming speech. These results suggest that healthy aging leads to decline in the ability to intermittently monitor for and detect conflict during speech planning and that subtle cognitive changes in preclinical AD magnify this aging deficit. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
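The claim that only autocorrect errors captured unique variance in predicting Aβ1-42 corresponds to entering both error measures (and age) in a single regression; a hedged sketch with simulated data follows. Variable names, units, and distributions are invented, and the published models were more elaborate.

```python
# Sketch: do autocorrect errors predict CSF A-beta 1-42 over and above Stroop errors?
# Simulated data; variable names, units, and distributions are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 52
df = pd.DataFrame({
    "abeta": rng.normal(600.0, 150.0, n),     # CSF A-beta 1-42 (pg/mL), assumed scale
    "autocorrect": rng.poisson(2, n),         # failures to stop autocorrect errors
    "stroop": rng.poisson(3, n),              # Stroop errors
    "age": rng.integers(66, 87, n),
})

fit = smf.ols("abeta ~ autocorrect + stroop + age", data=df).fit()
print(fit.summary().tables[1])   # the autocorrect coefficient reflects its unique contribution
```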
Collapse
Affiliation(s)
- Tamar H. Gollan
- Department of Psychiatry, University of California, San Diego
| | - Denis S. Smirnov
- Department of Neurosciences, University of California, San Diego
| | - David P. Salmon
- Department of Neurosciences, University of California, San Diego
| | - Douglas Galasko
- Department of Neurosciences, University of California, San Diego
| |
Collapse
|
30
|
Adaptation to pitch-altered feedback is independent of one's own voice pitch sensitivity. Sci Rep 2020; 10:16860. [PMID: 33033324 PMCID: PMC7544828 DOI: 10.1038/s41598-020-73932-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Accepted: 09/23/2020] [Indexed: 01/17/2023] Open
Abstract
Monitoring voice pitch is a fine-tuned process in daily conversations, as accurately conveying the linguistic and affective cues in a given utterance depends on the precise control of phonation and intonation. This monitoring is thought to depend on whether an error is treated as self-generated or externally generated, resulting in either a correction or an inflation of errors. The present study reports on two separate paradigms of adaptation to altered feedback, exploring whether participants behave in a more cohesive manner once the error is of perceptually comparable size. The vocal behavior of normal-hearing, fluent speakers was recorded in response to a personalized pitch-shift size versus a non-specific size of one semitone. The personalized shift size was determined from the just-noticeable difference in fundamental frequency (F0) of each participant's voice. We show that both tasks demonstrated opposing responses to a constant and predictable F0 perturbation (present from production onset), but these effects barely carried over once the feedback returned to normal, a pattern that bears some resemblance to compensatory responses. Experiencing an F0 shift that is perceived as self-generated (because it was precisely just-noticeable) is not enough to make speakers behave more consistently and more homogeneously in an opposing manner. On the contrary, our results suggest that neither the type nor the magnitude of the response depends in any trivial way on participants' sensitivity to their own voice pitch. Based on this finding, we speculate that error correction could occur even with a bionic ear, even when F0 cues are too subtle for cochlear implant users to detect accurately.
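The personalized shift described above amounts to translating a participant-specific just-noticeable difference in F0 into a shift magnitude that can be compared with the fixed one-semitone condition; a small sketch of that conversion follows, with the baseline F0 and JND values invented for illustration.

```python
# Sketch: convert a participant-specific F0 just-noticeable difference (JND)
# into a pitch-shift magnitude, compared with a fixed one-semitone shift.
# Values are hypothetical.
import math

def hz_to_cents(f_target_hz: float, f_reference_hz: float) -> float:
    """Interval between two frequencies in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f_target_hz / f_reference_hz)

baseline_f0 = 200.0    # speaker's habitual F0 in Hz (assumed)
jnd_hz = 3.5           # just-noticeable F0 difference measured for this speaker (assumed)

personalized_shift_cents = hz_to_cents(baseline_f0 + jnd_hz, baseline_f0)
fixed_shift_cents = 100.0    # one semitone

print(f"personalized shift: {personalized_shift_cents:.1f} cents")
print(f"fixed shift:        {fixed_shift_cents:.1f} cents")
```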
Collapse
|
31
|
Wagner-Altendorf TA, Gottschlich C, Robert C, Cirkel A, Heldmann M, Münte TF. The Suppression of Taboo Word Spoonerisms Is Associated With Altered Medial Frontal Negativity: An ERP Study. Front Hum Neurosci 2020; 14:368. [PMID: 33088266 PMCID: PMC7498727 DOI: 10.3389/fnhum.2020.00368] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 08/11/2020] [Indexed: 12/02/2022] Open
Abstract
The constant internal monitoring of speech is a crucial feature to ensure the fairly error-free process of speech production. It has been argued that internal speech monitoring takes place through detection of conflict between different response options or "speech plans." Speech errors are thought to occur because two (or more) competing speech plans become activated, and the speaker is unable to inhibit the erroneous plan(s) prior to vocalization. A prime example of a speech plan that has to be suppressed is the involuntary utterance of a taboo word. The present study seeks to examine the suppression of involuntary taboo word utterances. We used the "Spoonerisms of Laboratory Induced Predisposition" (SLIP) paradigm to elicit two competing speech plans, one being correct and one embodying either a taboo word or a non-taboo word spoonerism. Behavioral data showed that inadequate speech plans were generally suppressed effectively, although more effectively in the taboo word spoonerism condition. Event-related potential (ERP) analysis revealed a broad medial frontal negativity (MFN) after the target word pair presentation, interpreted as reflecting conflict detection and resolution to suppress the inadequate speech plan. The MFN was more pronounced in the taboo word spoonerism than in the neutral word spoonerism condition, indicative of a higher level of conflict when subjects suppressed the involuntary utterance of taboo words.
Collapse
Affiliation(s)
| | | | - Carina Robert
- Department of Neurology, University of Lübeck, Lübeck, Germany.,Institute of Psychology II, University of Lübeck, Lübeck, Germany
| | - Anna Cirkel
- Department of Neurology, University of Lübeck, Lübeck, Germany
| | - Marcus Heldmann
- Department of Neurology, University of Lübeck, Lübeck, Germany.,Institute of Psychology II, University of Lübeck, Lübeck, Germany
| | - Thomas F Münte
- Department of Neurology, University of Lübeck, Lübeck, Germany.,Institute of Psychology II, University of Lübeck, Lübeck, Germany
| |
Collapse
|
32
|
Abstract
Like all human activities, verbal communication is fraught with errors. It is estimated that humans produce around 16,000 words per day, but the word that is selected for production is not always correct, nor is the articulation always flawless. However, to facilitate communication, it is important to limit the number of errors. This is accomplished via the verbal monitoring mechanism. A body of research over the last century has uncovered a number of properties of the mechanisms at work during verbal monitoring. Over a dozen routes for verbal monitoring have been postulated. However, to date a complete account of verbal monitoring does not exist. In the current paper we first outline the properties of verbal monitoring that have been empirically demonstrated. This is followed by a discussion of current verbal monitoring models: the perceptual loop theory, conflict monitoring, the hierarchical state feedback control model, and the forward model theory. Each of these models is evaluated in light of empirical findings and theoretical considerations. We then outline the lacunae of current theories, which we address with a proposal for a new model of verbal monitoring for production and perception, based on conflict monitoring models. Additionally, this novel model suggests a mechanism for how a detected error leads to a correction. The error resolution mechanism proposed in our new model is then tested in a computational model. Finally, we outline the advances and predictions of the model.
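As a toy illustration of the conflict-based detection idea underlying this line of work (this is not the authors' computational model; the conflict measure, criterion, and values are invented), error detection can be triggered when the co-activation of competing response candidates exceeds a criterion, prompting a reselection attempt.

```python
# Toy sketch: conflict is high when several response candidates are active at once;
# crossing a criterion triggers error detection and a reselection attempt.
# Illustrative only; not the model proposed in the paper.
import numpy as np

def conflict(activations) -> float:
    """Sum of pairwise products of unit activations (Hopfield-energy-style conflict)."""
    a = np.asarray(activations, dtype=float)
    return float((a.sum() ** 2 - (a ** 2).sum()) / 2.0)

def monitor(activations, criterion: float = 0.15):
    a = np.asarray(activations, dtype=float)
    if conflict(a) > criterion:
        # Detection: suppress everything but the strongest candidate and reselect.
        repaired = np.where(a == a.max(), a, 0.0)
        return "error signalled, repaired", repaired
    return "no error signalled", a

print(monitor([0.9, 0.1, 0.05]))    # clear winner: low conflict, no error signal
print(monitor([0.55, 0.5, 0.05]))   # two strong candidates: high conflict, repair
```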
Collapse
|
33
|
Abstract
Speakers occasionally make speech errors, which may be detected and corrected. According to the comprehension-based account proposed by Levelt, Roelofs, and Meyer (1999) and Roelofs (2004), speakers detect errors by using their speech comprehension system for the monitoring of overt as well as inner speech. According to the production-based account of Nozari, Dell, and Schwartz (2011), speakers may use their comprehension system for external monitoring but error detection in internal monitoring is based on the amount of conflict within the speech production system, assessed by the anterior cingulate cortex (ACC). Here, I address three main arguments of Nozari et al. and Nozari and Novick (2017) against a comprehension-based account of internal monitoring, which concern cross-talk interference between inner and overt speech, a double dissociation between comprehension and self-monitoring ability in patients with aphasia, and a domain-general error-related negativity in the ACC that is allegedly independent of conscious awareness. I argue that none of the arguments are conclusive, and conclude that comprehension-based monitoring remains a viable account of self-monitoring in speaking.
Collapse
|
34
|
Roelofs A. On (Correctly Representing) Comprehension-Based Monitoring in Speaking: Rejoinder to Nozari (2020). J Cogn 2020; 3:20. [PMID: 32944683 PMCID: PMC7473236 DOI: 10.5334/joc.112] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 07/02/2020] [Indexed: 12/04/2022] Open
Abstract
Misunderstanding exists about what constitutes comprehension-based monitoring in speaking and what it empirically implies. Here, I make clear that the use of the speech comprehension system is the defining property of comprehension-based monitoring rather than conscious and deliberate processing, as maintained by Nozari (2020). Therefore, contrary to what Nozari claims, my arguments in Roelofs (2020) are suitable for addressing her criticisms raised against comprehension-based monitoring. Also, I indicate that Nozari does not correctly describe my view in a review of her paper. Finally, I further clarify what comprehension-based monitoring entails empirically, thereby dealing with Nozari's new criticisms and inaccurate descriptions of empirical findings. I conclude that comprehension-based monitoring remains a viable account of self-monitoring in speaking.
Collapse
Affiliation(s)
- Ardi Roelofs
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Centre for Cognition, Nijmegen, NL
| |
Collapse
|
35
|
Nozari N. A Comprehension- or a Production-Based Monitor? Response to Roelofs (2020). J Cogn 2020; 3:19. [PMID: 32944682 PMCID: PMC7473204 DOI: 10.5334/joc.102] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Accepted: 04/16/2020] [Indexed: 11/20/2022] Open
Abstract
Roelofs (2020) has put forth a rebuttal of the criticisms raised against comprehension-based monitoring and has also raised a number of objections against production-based monitors. In this response, I clarify that the model defended by Roelofs is not a comprehension-based monitor, but belongs to a class of monitoring models which I refer to as production-perception models. I review comprehension-based and production-perception models, highlight the strengths of each, and point out the differences between them. I then discuss the limitations of both for monitoring production at higher levels, which has been the motivation for production-based monitors. Next, I address the specific criticisms raised by Roelofs (2020) in light of the current evidence. I end by presenting several lines of argument that preclude a single monitoring mechanism from meeting all the demands of monitoring in a task as complex as communication. A more fruitful avenue is perhaps to focus on which theories are compatible with the nature of representations at specific levels of the production system and with the specific aims of monitoring in language production.
Collapse
Affiliation(s)
- Nazbanou Nozari
- Department of Psychology, Carnegie Mellon University, US
- Center for Neural Basis Cognition (CNBC), US
| |
Collapse
|
36
|
Lind A, Hartsuiker RJ. Self-Monitoring in Speech Production: Comprehending the Conflict Between Conflict- and Comprehension-Based Accounts. J Cogn 2020; 3:16. [PMID: 32944679 PMCID: PMC7473181 DOI: 10.5334/joc.118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 07/30/2020] [Indexed: 11/20/2022] Open
Affiliation(s)
- Andreas Lind
- Lund University Cognitive Science, Lund University, SE
| | | |
Collapse
|
37
|
Mandal AS, Fama ME, Skipper-Kallal LM, DeMarco AT, Lacey EH, Turkeltaub PE. Brain structures and cognitive abilities important for the self-monitoring of speech errors. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:319-338. [PMID: 34676371 PMCID: PMC8528269 DOI: 10.1162/nol_a_00015] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Accepted: 05/20/2020] [Indexed: 06/13/2023]
Abstract
The brain structures and cognitive abilities necessary for successful monitoring of one's own speech errors remain unknown. We aimed to inform self-monitoring models by examining the neural and behavioral correlates of phonological and semantic error detection in individuals with post-stroke aphasia. First, we determined whether detection related to other abilities proposed to contribute to monitoring according to various theories, including naming ability, fluency, word-level auditory comprehension, sentence-level auditory comprehension, and executive function. Regression analyses revealed that fluency and executive scores were independent predictors of phonological error detection, while a measure of word-level comprehension related to semantic error detection. Next, we used multivariate lesion-symptom mapping to determine lesion locations associated with reduced error detection. Reduced overall error detection related to damage to a region of frontal white matter extending into dorsolateral prefrontal cortex (DLPFC). Detection of phonological errors related to damage to the same areas, but the lesion-behavior association was stronger, suggesting the localization for overall error detection was driven primarily by phonological error detection. These findings demonstrate that monitoring of different error types relies on distinct cognitive functions, and provide causal evidence for the importance of frontal white matter tracts and DLPFC for self-monitoring of speech.
Collapse
Affiliation(s)
- Ayan S. Mandal
- University of Cambridge, Department of Psychiatry, Cambridge, UK
- Georgetown University Medical Center, Center for Brain Plasticity and Recovery and Department of Neurology, Washington, DC
| | - Mackenzie E. Fama
- Georgetown University Medical Center, Center for Brain Plasticity and Recovery and Department of Neurology, Washington, DC
- Towson University, Department of Audiology, Speech-Language Pathology, and Deaf Studies, Towson, MD
| | - Laura M. Skipper-Kallal
- Georgetown University Medical Center, Center for Brain Plasticity and Recovery and Department of Neurology, Washington, DC
| | - Andrew T. DeMarco
- Georgetown University Medical Center, Center for Brain Plasticity and Recovery and Department of Neurology, Washington, DC
| | - Elizabeth H. Lacey
- Georgetown University Medical Center, Center for Brain Plasticity and Recovery and Department of Neurology, Washington, DC
- MedStar National Rehabilitation Hospital, Research Division, Washington, DC
| | | |
Collapse
|
38
|
Neocortical activity tracks the hierarchical linguistic structures of self-produced speech during reading aloud. Neuroimage 2020; 216:116788. [DOI: 10.1016/j.neuroimage.2020.116788] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Revised: 02/19/2020] [Accepted: 03/20/2020] [Indexed: 11/19/2022] Open
|
39
|
Riès SK, Nadalet L, Mickelsen S, Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Pre-output Language Monitoring in Sign Production. J Cogn Neurosci 2020; 32:1079-1091. [PMID: 32027582 PMCID: PMC7234262 DOI: 10.1162/jocn_a_01542] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both errors and correct trials (Ne-like wave) but larger in errors than in correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture-word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in errors than in correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in pre-output language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors, including differences in language proficiency or more variable latencies from lexical access to motor programming for hearing than for deaf signers.
Collapse
Affiliation(s)
| | | | | | | | | | | | - Karen Emmorey
- San Diego State University
- University of California, San Diego
| |
Collapse
|
40
|
Nguyen TQ, Pickren SE, Saha NM, Cutting LE. Executive functions and components of oral reading fluency through the lens of text complexity. READING AND WRITING 2020; 33:1037-1073. [PMID: 32831478 PMCID: PMC7437995 DOI: 10.1007/s11145-020-10020-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
As readers struggle to coordinate various reading- and language-related skills during oral reading fluency (ORF), miscues can emerge, especially when processing complex texts. Following a miscue, students often self-correct as a strategy to potentially restore ORF and online linguistic comprehension. Executive functions (EF) are hypothesized to play an interactive role during ORF. Yet, the role of EF in self-corrections while reading complex texts remains elusive. To this end, we evaluated the relation between students' probability of self-correcting miscues, or P(SC), and their EF profile in a cohort of 143 participants (aged 9-15) who represented a diverse spectrum of reading abilities. Moreover, we used experimentally-manipulated passages (decoding, vocabulary, syntax, and cohesion) and employed a fully cross-classified mixed-effects multilevel regression strategy to evaluate the interplay between components of ORF, EF, and text complexity. Our results revealed that, after controlling for reading and language abilities, increased production of miscues across different passage conditions was explained by worse EF. We also found that students with better EF exhibited greater P(SC) when reading complex texts. While text complexity taxes students' EF and influences their production of miscues, findings suggest that EF may be interactively recruited to restore ORF via self-correcting oral reading errors. Overall, our results suggest that domain-general processes (e.g., EF) are associated with production of miscues and may underlie students' behavior of self-corrections, especially when reading complex texts. Further understanding of the relation between different components of ORF and cognitive processes may inform intervention strategies to improve reading proficiency and overall academic performance.
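A heavily simplified version of the trial-level analysis above is sketched below as a plain logistic regression of self-correction outcomes on an EF composite and passage condition; the published analysis used fully cross-classified mixed-effects models with random effects for students and passages, which this sketch (with invented data) does not reproduce.

```python
# Simplified sketch: model the probability of self-correcting a miscue, P(SC),
# from executive function (EF) and passage condition. Hypothetical data; the
# published analysis used cross-classified mixed-effects models instead.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_trials = 1000
df = pd.DataFrame({
    "self_corrected": rng.integers(0, 2, n_trials),   # 1 = miscue was self-corrected
    "ef": rng.normal(0, 1, n_trials),                 # standardized EF composite
    "condition": rng.choice(["decoding", "vocabulary", "syntax", "cohesion"], n_trials),
})

fit = smf.logit("self_corrected ~ ef * C(condition)", data=df).fit(disp=False)
print(fit.params)   # positive ef terms would indicate higher P(SC) with better EF
```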
Collapse
Affiliation(s)
- Tin Q. Nguyen
- Vanderbilt University, 1400 18th Avenue South, Nashville, TN 37203, USA
| | - Sage E. Pickren
- Vanderbilt University, 1400 18th Avenue South, Nashville, TN 37203, USA
| | - Neena M. Saha
- Vanderbilt University, 1400 18th Avenue South, Nashville, TN 37203, USA
| | - Laurie E. Cutting
- Vanderbilt University, 1400 18th Avenue South, Nashville, TN 37203, USA
| |
Collapse
|
41
|
Fama ME, Turkeltaub PE. Inner Speech in Aphasia: Current Evidence, Clinical Implications, and Future Directions. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2020; 29:560-573. [PMID: 31518502 PMCID: PMC7233112 DOI: 10.1044/2019_ajslp-cac48-18-0212] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 01/29/2019] [Accepted: 05/01/2019] [Indexed: 06/10/2023]
Abstract
Purpose: Typical language users can engage in a lively internal monologue for introspection and task performance, but what is the nature of inner speech among individuals with aphasia? Studying the phenomenon of inner speech in this population has the potential to further our understanding of inner speech more generally, help clarify the subjective experience of those with aphasia, and inform clinical practice. In this scoping review, we describe and synthesize the existing literature on inner speech in aphasia. Method: Studies examining inner speech in aphasia were located through electronic databases and citation searches. Across the various studies, methods include both subjective approaches (i.e., asking individuals with aphasia about the integrity of their inner speech) and objective approaches (i.e., administering objective language tests as proxy measures for inner speech ability). The findings of relevant studies are summarized. Results: Although definitions of inner speech vary across research groups, studies using both subjective and objective methods have established findings showing that inner speech can be preserved relative to spoken language in individuals with aphasia, particularly among those with relatively intact word retrieval and difficulty primarily at the level of speech output processing. Approaches that combine self-report with objective measures have demonstrated that individuals with aphasia are, on the whole, reliably able to report the integrity of their inner speech. Conclusions: The examination of inner speech in individuals with aphasia has potential implications for clinical practice, in that differences in the preservation of inner speech across individuals may help guide clinical decision making around aphasia treatment. Although there are many questions that remain open to further investigation, studying inner speech in this specific population has also contributed to a broader understanding of the mechanisms of inner speech more generally.
Collapse
Affiliation(s)
- Mackenzie E. Fama
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD
- Center for Brain Plasticity and Recovery, Georgetown University, Washington, DC
| | - Peter E. Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University, Washington, DC
- Research Division, MedStar National Rehabilitation Network, Washington, DC
| |
Collapse
|
42
|
Gollan TH, Li C, Stasenko A, Salmon DP. Intact reversed language-dominance but exaggerated cognate effects in reading aloud of language switches in bilingual Alzheimer's disease. Neuropsychology 2020; 34:88-106. [PMID: 31545627 DOI: 10.1037/neu0000592] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
OBJECTIVE: The current study investigated how Alzheimer's disease (AD) affects production of speech errors during reading aloud of mixed-language passages with language switches on cognates (e.g., family/familia), noncognates (e.g., people/gente), and function words (the/la). METHOD: Twelve Spanish-English bilinguals with AD and 22 controls read aloud 8 paragraphs in 4 conditions: (a) English-default content switches, (b) English-default function switches, (c) Spanish-default content switches, and (d) Spanish-default function switches. RESULTS: Reading elicited language intrusions (e.g., saying la instead of the) and several types of within-language errors (e.g., saying their instead of the). Reversed language-dominance effects were intact in AD; both patients and controls produced many intrusions on dominant-language targets and relatively fewer intrusions on nondominant-language targets. The opposite held for within-language errors, which were more common with nondominant than dominant targets. Patients produced the most intrusion errors with cognate switch words (which best distinguished patients from controls in ROC curves of all speech error types), while controls had equal difficulty switching on cognate and function word targets. CONCLUSIONS: Reversed language-dominance effects appear to illustrate automatic inhibitory control over the dominant language, but could instead reflect limited resources available for monitoring when completing a task in the nondominant language. The greater sensitivity of intrusion errors with cognate than with function word targets for distinguishing patients from controls implies that language control may be aided by relatively intact knowledge of grammatical constraints over code-switching in bilinguals with AD. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
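The ROC comparison referred to above (which error type best separates patients from controls) reduces to an AUC per error measure; a sketch with invented error counts follows.

```python
# Sketch: compare how well different speech-error counts discriminate patients
# with AD from controls, via ROC AUC. Counts and group sizes are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
group = np.array([1] * 12 + [0] * 22)                 # 1 = patient, 0 = control
cognate_intrusions = np.concatenate([rng.poisson(4, 12), rng.poisson(1, 22)])
function_intrusions = np.concatenate([rng.poisson(2, 12), rng.poisson(1, 22)])

for name, errors in [("cognate switches", cognate_intrusions),
                     ("function-word switches", function_intrusions)]:
    print(name, "AUC =", round(roc_auc_score(group, errors), 2))
```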
Collapse
|
43
|
Pinet S, Nozari N. Electrophysiological Correlates of Monitoring in Typing with and without Visual Feedback. J Cogn Neurosci 2019; 32:603-620. [PMID: 31702430 DOI: 10.1162/jocn_a_01500] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
New theories of monitoring in language production, regardless of their mechanistic differences, all posit monitoring mechanisms that share general computational principles with action monitoring. This perspective, if accurate, would predict that many electrophysiological signatures of performance monitoring should be recoverable from language production tasks. In this study, we examined both error-related and feedback-related EEG indices of performance monitoring in the context of a typing-to-dictation task. To disentangle the contribution of external from internal monitoring processes, we created a condition where participants immediately saw the word they typed (the immediate-feedback condition) versus one in which displaying the word was delayed until the end of the trial (the delayed-feedback condition). The removal of immediate visual feedback prompted a stronger reliance on internal monitoring processes, which resulted in lower correction rates and a clear error-related negativity. Compatible with domain-general monitoring views, an error positivity was only recovered under conditions where errors were detected or had a high likelihood of being detected. Examination of the feedback-related indices (feedback-related negativity and frontocentral positivity) revealed a two-stage process of integration of internal and external information. The recovery of a full range of well-established EEG indices of action monitoring in a language production task strongly endorses domain-general views of monitoring. Such indices, in turn, are helpful in understanding how information from different monitoring channels is combined.
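The error-related negativity contrast at the heart of this design boils down to averaging response-locked epochs separately for error and correct trials and inspecting the fronto-central difference wave; the sketch below uses simulated single-channel data (no real EEG preprocessing, and the latency and amplitude parameters are invented).

```python
# Sketch: response-locked ERP averages for error vs. correct trials and their
# difference wave (an ERN-like deflection). Simulated single-channel data.
import numpy as np

rng = np.random.default_rng(7)
sfreq, n_times = 250, 250                     # 1-s epochs at 250 Hz
t = np.arange(n_times) / sfreq - 0.5          # -0.5 to +0.5 s around the keystroke

def make_epochs(n_trials, ern_amp):
    noise = rng.normal(0, 2.0, (n_trials, n_times))
    ern = ern_amp * np.exp(-((t - 0.06) ** 2) / (2 * 0.03 ** 2))  # negativity ~60 ms post-response
    return noise - ern

error_epochs = make_epochs(80, ern_amp=4.0)
correct_epochs = make_epochs(400, ern_amp=1.0)   # smaller Ne-like wave on correct trials

difference_wave = error_epochs.mean(axis=0) - correct_epochs.mean(axis=0)
peak_idx = difference_wave.argmin()
print(f"ERN-like peak: {difference_wave[peak_idx]:.2f} uV at {t[peak_idx] * 1000:.0f} ms")
```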
Collapse
|
44
|
Sasisekaran J, Weathers EJ. Disfluencies and phonological revisions in a nonword repetition task in school-age children who stutter. JOURNAL OF COMMUNICATION DISORDERS 2019; 81:105917. [PMID: 31247507 DOI: 10.1016/j.jcomdis.2019.105917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Revised: 06/04/2019] [Accepted: 06/09/2019] [Indexed: 06/09/2023]
Abstract
Phonological encoding and associated functions, including monitoring of covert and overt speech, have been attributed relevant roles in stuttering. The aim of this study was to investigate these processes by testing the effects of nonword length in syllables (3-, 4-, 6-syllable), phonotactics, and phonemic/phonetic complexity on disfluencies and phonological revisions in 26 school-age children who stutter (CWS, n = 13) and matched fluent controls (CWNS). Participants repeated nonwords in two sessions separated by an hour. Within-group comparisons of percentage disfluencies using nonparametric tests resulted in significantly more disfluencies for the 6- compared to the 3-syllable nonwords and suggested that nonword length influences disfluencies in the CWS. The groups were comparable in the percentage of disfluencies at all levels of nonword length. The findings failed to provide conclusive evidence that phonological complexity and phonotactic manipulations have a greater effect on disfluencies in CWS compared to CWNS. The findings of significantly fewer phonological revisions and the lack of a significant correlation between disfluencies and revisions in the CWS in Session 1 compared to the CWNS are interpreted to suggest reduced external auditory monitoring. Demands on incremental phonological encoding with increasing task complexity (the Covert Repair Hypothesis, Postma & Kolk, 1993) and reduced external auditory monitoring of stuttered speech can account for the disfluencies, speech errors, and revisions in the speech of school-age CWS.
Collapse
Affiliation(s)
- Jayanthi Sasisekaran
- Department of Speech Language Hearing Sciences, University of Minnesota, United States.
| | - Erin J Weathers
- Department of Communication Sciences and Disorders, University of Iowa, United States
| |
Collapse
|
45
|
Roll LC, Siu OL, Li SYW, De Witte H. Human Error: The Impact of Job Insecurity on Attention-Related Cognitive Errors and Error Detection. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2019; 16:ijerph16132427. [PMID: 31288465 PMCID: PMC6651186 DOI: 10.3390/ijerph16132427] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Revised: 06/18/2019] [Accepted: 06/25/2019] [Indexed: 11/16/2022]
Abstract
(1) Background: Work-related stress is a major contributor to human error. One significant workplace stressor is job insecurity, which has been linked to an increased likelihood of experiencing burnout. This, in turn, might affect human error, specifically attention-related cognitive errors (ARCES) and the ability to detect errors. ARCES can be costly for organizations and pose a safety risk. Equally detrimental effects can be caused by failure to detect errors before they can cause harm. (2) Methods: We gathered self-report and behavioral data from 148 employees working in the educational, financial, and medical sectors in China. We designed and piloted an error detection task in which employees had to compare fictitious customer orders to deliveries of an online shop. We tested for indirect effects using the PROCESS macro with bootstrapping. (3) Results: Our findings confirmed indirect effects of job insecurity on both ARCES and the ability to detect errors via burnout. (4) Conclusions: The present research shows that job insecurity influences making and detecting errors through its relationship with burnout. These findings suggest that job insecurity could increase the likelihood of human error, with potential implications for employees' safety and the safety of others.
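The indirect-effect test described above (job insecurity to burnout to errors) can be approximated with a percentile bootstrap of the a*b product in the spirit of the PROCESS macro; the sketch uses simulated data and plain OLS paths, so it is an approximation rather than the authors' analysis.

```python
# Sketch of a bootstrapped indirect effect (mediation): job insecurity ->
# burnout -> attention-related cognitive errors. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 148
insecurity = rng.normal(0, 1, n)
burnout = 0.5 * insecurity + rng.normal(0, 1, n)
arces = 0.4 * burnout + 0.1 * insecurity + rng.normal(0, 1, n)

def indirect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                           # path a: x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]     # path b: m -> y given x
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect(insecurity[s], burnout[s], arces[s]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped indirect effect, 95% CI: [{lo:.3f}, {hi:.3f}]")
```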
Collapse
Affiliation(s)
- Lara Christina Roll
- Department of Applied Psychology, Lingnan University, Hong Kong, China.
- Optentia Research Focus Area, North-West University, Vanderbijlpark 1900, South Africa.
| | - Oi-Ling Siu
- Department of Applied Psychology, Lingnan University, Hong Kong, China
| | - Simon Y W Li
- Department of Applied Psychology, Lingnan University, Hong Kong, China
| | - Hans De Witte
- Optentia Research Focus Area, North-West University, Vanderbijlpark 1900, South Africa
- Work, Organisational, and Personnel Psychology Research Group, KU Leuven, 3000 Leuven, Belgium
| |
Collapse
|
46
|
Abstract
In [Nozari, N., & Hepner, C. R. (2018). To select or to wait? The importance of criterion setting in debates of competitive lexical selection. Cognitive Neuropsychology. Advance online publication. doi:10.1080/02643294.2018.1476335], we proposed a theoretical framework for reconciling two seemingly irreconcilable theories of lexical selection: competitive vs. non-competitive selection. The key point in this framework is the division of language production into two separate, albeit interacting, systems: a decision-making framework and a multi-layered system that maps meaning to sound. Technically, this can be accomplished by superimposing a signal detection model onto the distributions of conflict derived from the core dynamics of mapping semantic features to lexical representations. Based on this framework, we argued that a flexible selection criterion could accommodate patterns predicted by both competitive and non-competitive models of lexical selection. Five excellent commentaries posed various questions regarding the necessity, applicability, and scope of the proposed framework. This paper addresses those questions.
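The central idea above, a signal detection decision superimposed on distributions of conflict, can be illustrated with a toy example in which selection is deferred whenever conflict exceeds a criterion; the distribution parameters and criterion values are invented for illustration.

```python
# Toy sketch: superimpose a signal detection criterion on distributions of
# conflict from correct vs. error trials; selection is deferred ("wait")
# when conflict exceeds the criterion. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(6)
conflict_correct = rng.normal(0.3, 0.1, 10_000)   # conflict on eventually-correct trials
conflict_error = rng.normal(0.6, 0.1, 10_000)     # conflict on eventually-erroneous trials

def outcome_rates(criterion: float):
    """Proportion of trials flagged (selection postponed) above the criterion."""
    hits = np.mean(conflict_error > criterion)            # errors caught before selection
    false_alarms = np.mean(conflict_correct > criterion)  # correct trials needlessly delayed
    return hits, false_alarms

for criterion in (0.4, 0.5, 0.6):
    h, fa = outcome_rates(criterion)
    print(f"criterion={criterion:.1f}  hit rate={h:.2f}  false-alarm rate={fa:.2f}")
```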
Collapse
Affiliation(s)
- Nazbanou Nozari
- Department of Neurology, Johns Hopkins University, Baltimore, MD, USA.,Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
| | | |
Collapse
|
47
|
Key-DeLyria SE, Bodner T, Altmann LJP. Rapid Serial Visual Presentation Interacts with Ambiguity During Sentence Comprehension. JOURNAL OF PSYCHOLINGUISTIC RESEARCH 2019; 48:665-682. [PMID: 30612265 DOI: 10.1007/s10936-018-09624-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Conventional opinion about using Rapid Serial Visual Presentation (RSVP) for examining sentence comprehension maintains that RSVP taxes working memory (WM), which probably affects sentence processing. However, most RSVP studies only infer the involvement of WM. Other cognitive resources, such as cognitive control or vocabulary may also impact sentence comprehension and interact with RSVP. Further, sentence ambiguity is predicted to interact with RSVP and cognitive resources to impact sentence comprehension. To test these relationships, participants read ambiguous and unambiguous sentences using RSVP and Whole-Sentence presentation, followed by comprehension questions that were targeted to the ambiguous region of the sentences. Presentation type and ambiguity interacted to affect RT such that the effect of RSVP was exaggerated for ambiguous sentences. RT effects were moderated by WM and vocabulary. WM and cognitive control affected accuracy. Findings are discussed in light of depth of processing and the impact of cognitive resources on sentence comprehension.
Collapse
Affiliation(s)
- Sarah E Key-DeLyria
- Speech and Hearing Sciences Department, Portland State University, P.O. Box 751, Portland, OR, 97207-0751, USA.
| | - Todd Bodner
- Department of Psychology, Portland State University, Portland, OR, USA
| | - Lori J P Altmann
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
| |
Collapse
|
48
|
Abstract
The current study investigated the contribution of phonology to bilingual language control in connected speech. Speech production was elicited by asking Mandarin-English bilinguals to read aloud paragraphs either in Chinese or English, while six words were switched to the other language in each paragraph. The switch words were either cognates or noncognates, and switching difficulty was measured by production of cross-language intrusion errors on the switch words (e.g., mistakenly saying 巧克力 (qiao3-ke4-li4) instead of chocolate). All the bilinguals were Mandarin-dominant, but produced more intrusion errors when target words were written in Chinese than when written in English (i.e., they exhibited robust reversed dominance effects). Most critically, bilinguals produced significantly more intrusions on Chinese cognates, but also detected and self-corrected these same errors more quickly than with noncognates. Phonological overlap boosts dual-language activation, thus leading to greater competition between languages and increased response conflict, thereby increasing production of intrusions but also facilitating error detection during speech monitoring.
Collapse
|
49
|
The impact of spelling regularity on handwriting production: A coupled fMRI and kinematics study. Cortex 2019; 113:111-127. [DOI: 10.1016/j.cortex.2018.11.024] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Revised: 07/13/2018] [Accepted: 11/27/2018] [Indexed: 11/17/2022]
|
50
|
Bourguignon NJ, Gracco VL. A dual architecture for the cognitive control of language: Evidence from functional imaging and language production. Neuroimage 2019; 192:26-37. [PMID: 30831311 DOI: 10.1016/j.neuroimage.2019.02.043] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 02/13/2019] [Accepted: 02/18/2019] [Indexed: 12/28/2022] Open
Abstract
The relation between language processing and the cognitive control of thought and action is a widely debated issue in cognitive neuroscience. While recent research suggests a modular separation between a 'language system' for meaningful linguistic processing and a 'multiple-demand system' for cognitive control, other findings point to more integrated perspectives in which controlled language processing emerges from a division of labor between (parts of) the language system and (parts of) the multiple-demand system. We test here a dual approach to the cognitive control of language predicated on the notion of cognitive control as the combined contribution of a semantic control network (SCN) and a working memory network (WMN), supporting top-down manipulation of (lexico-)semantic information and the monitoring of information in verbal working memory, respectively. We reveal these networks in a large-scale coordinate-based meta-analysis contrasting functional imaging studies of verbal working memory vs. active judgments on (lexico-)semantic information, and show the extent of their overlap with the multiple-demand system and the language system. Testing these networks' involvement in a functional imaging study of object naming and verb generation, we then show that the SCN specializes in top-down retrieval and selection of (lexico-)semantic representations amongst competing alternatives, while the WMN intervenes at a more general level of control modulated in part by the number of competing responses available for selection. These results have implications for conceptualizing the neurocognitive architecture of language and cognitive control.
Collapse
Affiliation(s)
- Nicolas J Bourguignon
- Psychological Sciences Department, University of Connecticut, Storrs, USA; Haskins Laboratories, New Haven, CT, USA.
| | - Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; School of Communication Sciences and Disorders, McGill University, Montreal, Canada.
| |
Collapse
|