1
Baker C, Love T. Modulating Complex Sentence Processing in Aphasia Through Attention and Semantic Networks. J Speech Lang Hear Res 2023;66:5011-5035. PMID: 37934886. PMCID: PMC11001378. DOI: 10.1044/2023_jslhr-23-00298.
Abstract
Purpose: Lexical processing impairments such as delayed and reduced activation of lexical-semantic information have been linked to syntactic processing disruptions and sentence comprehension deficits in individuals with aphasia (IWAs). Lexical-level deficits can also preclude successful lexical encoding during sentence processing and amplify the processing costs of similarity-based interference during syntactic retrieval. We investigate whether two manipulations that engage attention and pre-activate semantic features of a target (to-be-retrieved) noun will (a) boost lexical activation during initial lexical encoding and (b) facilitate syntactic dependency linking through improved resolution of interference in IWAs and neurologically unimpaired age-matched controls (AMCs).
Method: Eye-tracking-while-listening with a visual world paradigm was used to investigate whether semantic and attentional manipulations modulated initial lexical processing and downstream syntactic retrieval of the direct-object noun in object-relative sentences.
Results: In both the attention and semantic manipulations, the AMC group showed no changes in initial lexical access levels; however, gaze patterns revealed clear facilitation of dependency linking and interference resolution. In the IWA group, the attentional cue increased and maintained activation of N1, with modest facilitation of dependency linking. In the semantic condition, IWAs showed a greater degree of facilitation during dependency linking.
Conclusions: The results suggest that attention and semantic activation are parameters that may be manipulated to strengthen encoding of lexical representations, facilitating retrieval (i.e., dependency linking) and mitigating similarity-based interference. In IWAs, these manipulations may help to reduce lexical processing deficits that can preclude successful encoding.
Affiliation(s)
- Carolyn Baker
- SDSU/UCSD Joint Doctoral Program in Language & Communicative Disorders, San Diego, CA
- Tracy Love
- SDSU/UCSD Joint Doctoral Program in Language & Communicative Disorders, San Diego, CA
- School of Speech, Language, and Hearing Sciences, San Diego State University, CA
2
Mahowald K, Diachek E, Gibson E, Fedorenko E, Futrell R. Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages. Cognition 2023;241:105543. PMID: 37713956. DOI: 10.1016/j.cognition.2023.105543.
Abstract
Grammatical cues are sometimes redundant with word meanings in natural language. For instance, English word order rules constrain the word order of a sentence like "The dog chewed the bone" even though the status of "dog" as subject and "bone" as object can be inferred from world knowledge and plausibility. Quantifying how often this redundancy occurs, and how the level of redundancy varies across typologically diverse languages, can shed light on the function and evolution of grammar. To that end, we performed a behavioral experiment in English and Russian and a cross-linguistic computational analysis measuring the redundancy of grammatical cues in transitive clauses extracted from corpus text. English and Russian speakers (n = 484) were presented with subjects, verbs, and objects (in random order and with morphological markings removed) extracted from naturally occurring sentences and were asked to identify which noun was the subject of the action. Accuracy was high in both languages (∼89% in English, ∼87% in Russian). Next, we trained a neural-network classifier on a similar task: predicting which nominal in a subject-verb-object triad is the subject. Across 30 languages from eight language families, performance was consistently high: a median accuracy of 87%, comparable to the accuracy observed in the human experiments. We conclude that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu/Lu helped Ray), and non-prototypical meanings (e.g., "The bone chewed the dog.").
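The classification task described in this abstract can be illustrated with a toy sketch: given a noun pair with order and morphology removed, guess the subject from meaning alone. The word list and agent-plausibility scores below are invented for illustration; the actual study trained a neural network on corpus data.

```python
# Hypothetical "world knowledge": how plausible each noun is as an agent.
AGENCY = {"dog": 0.9, "bone": 0.1, "boy": 0.9, "apple": 0.05, "ray": 0.8, "lu": 0.8}

def guess_subject(noun_a, noun_b):
    """Pick the noun more plausible as an agent; ties are unresolvable
    (the reversible cases where grammatical cues are indispensable)."""
    sa, sb = AGENCY[noun_a.lower()], AGENCY[noun_b.lower()]
    if sa == sb:
        return None  # reversible: meaning alone cannot decide
    return noun_a if sa > sb else noun_b

# Triads as (noun1, verb, noun2, true_subject) -- toy examples.
triads = [
    ("dog", "chewed", "bone", "dog"),
    ("boy", "ate", "apple", "boy"),
    ("bone", "chewed", "dog", "bone"),  # non-prototypical meaning: heuristic fails
    ("ray", "helped", "lu", "ray"),     # reversible: heuristic must abstain
]

correct = sum(guess_subject(a, b) == subj for a, _, b, subj in triads)
print(f"accuracy: {correct}/{len(triads)}")  # prints "accuracy: 2/4"
```

The two failure cases mirror the paper's point: plausibility decides most clauses, but reversible and non-prototypical ones need grammatical cues.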
Affiliation(s)
- Kyle Mahowald
- The University of Texas at Austin, Linguistics, USA.
- Edward Gibson
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA
- Evelina Fedorenko
- Massachusetts Institute of Technology, Brain and Cognitive Sciences, USA; Massachusetts Institute of Technology, McGovern Institute for Brain Research, USA
3
The role of the l-IPS in the comprehension of reversible and irreversible sentences: an rTMS study. Brain Struct Funct 2020;225:2403-2414. PMID: 32844277. PMCID: PMC7544754. DOI: 10.1007/s00429-020-02130-6.
Abstract
Thematic roles can be seen as semantic labels assigned to who/what is taking part in the event denoted by a verb. Encoding thematic relations is crucial for sentence interpretation since it relies on both syntactic and semantic aspects. In previous studies, repetitive transcranial magnetic stimulation (rTMS) over the left inferior intraparietal sulcus (l-IPS) selectively influenced performance accuracy on reversible passive (but not active) sentences. The effect was attributed to the fact that in these sentences the assignment of the agent and theme roles requires re-analysis of the first-pass sentence parsing.
To evaluate the role of reversibility and non-canonical word order (passive voice) on the effect, rTMS was applied over l-IPS during a sentence comprehension task that included reversible and irreversible, active and passive sentences. Participants were asked to identify who/what was performing the action or who/what the action was being performed on.
Stimulation of the l-IPS increased response time on reversible passive sentences but not on reversible active sentences. Importantly, no effect was found on irreversible sentences, irrespective of sentence diathesis. Results suggest that neither reversibility nor sentence diathesis alone is responsible for the effect, which is likely triggered and constrained by the combination of semantic reversibility and non-canonical word order. Combined with the results of previous studies, and irrespective of the specific role of each feature, these findings support the view that the l-IPS is critically involved in the assignment of thematic roles in reversible sentences.
4
Toward a functional neuroanatomy of semantic aphasia: A history and ten new cases. Cortex 2016;97:164-182. PMID: 28277283. DOI: 10.1016/j.cortex.2016.09.012.
Abstract
Almost 70 years ago, Alexander Luria incorporated semantic aphasia among his aphasia classifications by demonstrating that deficits in linking the logical relationships of words in a sentence could co-occur with non-linguistic disorders of calculation, spatial gnosis, and praxis. In line with his comprehensive approach to the assessment of language and other cognitive functions, he argued that deficits in understanding semantically reversible sentences and prepositional phrases, for example, reflected a single neuropsychological factor of impaired spatial analysis and synthesis, since understanding such grammatical relationships would also draw on their spatial relationships. Critically, Luria demonstrated the neural underpinnings of this syndrome by implicating the cortex of the left temporal-parietal-occipital (TPO) junction. In this study, we report neuropsychological and lesion profiles of 10 new cases of semantic aphasia. Modern neuroimaging techniques support the relevance of the left TPO area for semantic aphasia, but also extend Luria's neuroanatomical model by taking into account white matter pathways. Our findings suggest that tracts with parietal connectivity - the arcuate fasciculus (long and posterior segments), the inferior fronto-occipital fasciculus, the inferior longitudinal fasciculus, the superior longitudinal fasciculus II and III, and the corpus callosum - are implicated in the linguistic and non-linguistic deficits of patients with semantic aphasia.
5
Wang J, Cherkassky VL, Yang Y, Chang KMK, Vargas R, Diana N, Just MA. Identifying thematic roles from neural representations measured by functional magnetic resonance imaging. Cogn Neuropsychol 2016;33:257-64. PMID: 27314175. DOI: 10.1080/02643294.2016.1182480.
Abstract
The generativity and complexity of human thought stem in large part from the ability to represent relations among concepts and form propositions. The current study reveals how a given object such as rabbit is neurally encoded differently and identifiably depending on whether it is an agent ("the rabbit punches the monkey") or a patient ("the monkey punches the rabbit"). Machine-learning classifiers were trained on functional magnetic resonance imaging (fMRI) data evoked by a set of short videos that conveyed agent-verb-patient propositions. When tested on a held-out video, the classifiers were able to reliably identify the thematic role of an object from its associated fMRI activation pattern. Moreover, when trained on one subset of the study participants, classifiers reliably identified the thematic roles in the data of a left-out participant (mean accuracy = .66), indicating that the neural representations of thematic roles were common across individuals.
Affiliation(s)
- Jing Wang
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Vladimir L Cherkassky
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Ying Yang
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Kai-Min Kevin Chang
- Language Technologies Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA
- Robert Vargas
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Nicholas Diana
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
- Marcel Adam Just
- Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, USA
6
Finocchiaro C, Capasso R, Cattaneo L, Zuanazzi A, Miceli G. Thematic role assignment in the posterior parietal cortex: A TMS study. Neuropsychologia 2015;77:223-32. PMID: 26318240. DOI: 10.1016/j.neuropsychologia.2015.08.025.
Abstract
Verbs denote relations between entities acting a role in an event. Thematic roles are essential to the correct use of verbs and involve both semantic and syntactic aspects. We used repetitive transcranial magnetic stimulation (rTMS) to study the involvement of three different left parietal sites in the understanding of thematic roles. In a sentence-to-picture matching task, twelve participants were asked to judge whether or not a given picture matched a written sentence. Pictures represented simple reversible actions, and sentences were in the active or passive diathesis. Whereas both active and passive sentences require the correct encoding of thematic roles, passives also imply thematic reanalysis, as the canonical order of thematic roles is systematically reversed. The experiment was divided into three sessions. In each session a different parietal site (anterior, middle, posterior) was stimulated at 5 Hz in an event-related fashion, time-locked to the presentation of visual stimuli. Results showed increased accuracy for passive sentences following posterior parietal stimulation. The effect appeared to be (a) TMS-related, as no effect was observed in a control, no-TMS experiment with eighteen new participants; and (b) independent of semantic processes involved in word-picture association, as no TMS-related effects were observed in a picture-word matching task. We interpret the results as showing that the posterior parietal site is specifically involved in the assignment of thematic roles, in particular when the correct interpretation of a sentence requires reanalysis of temporarily encoded thematic roles, as in passive reversible sentences.
Affiliation(s)
- Chiara Finocchiaro
- Dipartimento di Psicologia e Scienze Cognitive, Università di Trento, Trento, Italy.
- Gabriele Miceli
- Dipartimento di Psicologia e Scienze Cognitive, Università di Trento, Trento, Italy; Center for Mind/Brain Sciences (CIMeC), Trento, Italy
7
Weiss-Croft LJ, Baldeweg T. Maturation of language networks in children: A systematic review of 22 years of functional MRI. Neuroimage 2015. PMID: 26213350. DOI: 10.1016/j.neuroimage.2015.07.046.
Abstract
Understanding how language networks change during childhood is important for theories of cognitive development and for identifying the neural causes of language impairment. Despite this, there is currently little systematic neuroimaging evidence regarding the typical developmental trajectory for language. We reviewed functional MRI (fMRI) studies published between 1992 and 2014, and quantified the evidence for age-related changes in localisation and lateralisation of fMRI activation in the language network (excluding the cerebellum and subcortical regions). Although age-related changes differed according to task type and input modality, we identified four consistent findings concerning the typical maturation of the language system. First, activation in core semantic processing regions increases with age. Second, activation in lower-level sensory and motor regions increases with age as activation in higher-level control regions reduces. We suggest that this reflects increased automaticity of language processing as children become more proficient. Third, the posterior cingulate cortex and precuneus (regions associated with the default mode network) show increasing attenuation across childhood and adolescence. Finally, language lateralisation is established by approximately 5 years of age. Small increases in leftward lateralisation are observed in frontal regions, but these are tightly linked to performance.
Affiliation(s)
- Louise J Weiss-Croft
- Cognitive Neuroscience and Neuropsychiatry Section, Developmental Neurosciences Programme, UCL Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK.
- Torsten Baldeweg
- Cognitive Neuroscience and Neuropsychiatry Section, Developmental Neurosciences Programme, UCL Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK.
8
Abstract
In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. 
Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the formation of the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex to exhibit operational heterogeneity challenged the established concept of unimodality of this region.
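The hub analysis this abstract describes can be sketched at toy scale: threshold a functional connectivity (correlation) matrix into a binary graph and rank regions by degree. The 5x5 matrix and region names below are invented illustrative values, not the study's data, and degree is only the simplest of the graph measures such work typically uses.

```python
REGIONS = ["M1", "S1", "parietal", "insula", "thalamus"]

# symmetric toy correlation matrix (diagonal ignored)
CONN = [
    [1.0, 0.8, 0.7, 0.2, 0.1],
    [0.8, 1.0, 0.6, 0.3, 0.2],
    [0.7, 0.6, 1.0, 0.2, 0.1],
    [0.2, 0.3, 0.2, 1.0, 0.5],
    [0.1, 0.2, 0.1, 0.5, 1.0],
]

def degrees(conn, threshold):
    """Binarise the matrix at `threshold` and count each node's edges."""
    n = len(conn)
    return [sum(1 for j in range(n) if j != i and conn[i][j] >= threshold)
            for i in range(n)]

def hubs(conn, names, threshold=0.5):
    """Regions whose degree exceeds the network mean: candidate hubs."""
    deg = degrees(conn, threshold)
    mean = sum(deg) / len(deg)
    return [name for name, d in zip(names, deg) if d > mean]

print(hubs(CONN, REGIONS))  # prints "['M1', 'S1', 'parietal']"
```

In this toy network the densely interconnected sensorimotor/parietal nodes emerge as hubs, mirroring the core hub network the study reports.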
Affiliation(s)
- Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland, United States of America
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
9
Yokoyama S, Takahashi K, Kawashima R. Animacy or case marker order? Priority information for online sentence comprehension in a head-final language. PLoS One 2014;9:e93109. PMID: 24664132. PMCID: PMC3963992. DOI: 10.1371/journal.pone.0093109.
Abstract
It is well known that case marker information and animacy information are incrementally used to comprehend sentences in head-final languages. However, it is still unclear how these two kinds of information are processed when they are in competition in a sentence's surface expression. The current study used sentences conveying the potentiality of some event (henceforth, potential sentences) in the Japanese language with theoretically canonical word order (dative-nominative/animate-inanimate order) and with scrambled word order (nominative-dative/inanimate-animate order). In Japanese, nominative-first case order and animate-inanimate animacy order are preferred to their reversed patterns in simplex sentences. Hence, in these potential sentences, case information and animacy information are in competition. The experiment consisted of a self-paced reading task testing two conditions (that is, canonical and scrambled potential sentences). Forty-five native speakers of Japanese participated. In our results, the canonical potential sentences showed a scrambling cost at the second argument position (the nominative argument). This result indicates that the theoretically scrambled case marker order (nominative-dative) is processed as a mentally canonical case marker order, suggesting that case information is used preferentially over animacy information when the two are in competition. The implications of our findings are discussed with regard to incremental simplex sentence comprehension models for head-final languages.
Affiliation(s)
- Satoru Yokoyama
- Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Graduate School of International Cultural Studies, Tohoku University, Sendai, Miyagi, Japan
- Kei Takahashi
- Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Ryuta Kawashima
- Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Miyagi, Japan
10
Meltzer JA, Wagage S, Ryder J, Solomon B, Braun AR. Adaptive significance of right hemisphere activation in aphasic language comprehension. Neuropsychologia 2013;51:1248-59. PMID: 23566891. PMCID: PMC3821997. DOI: 10.1016/j.neuropsychologia.2013.03.007.
Abstract
Aphasic patients often exhibit increased right hemisphere activity during language tasks. This may represent takeover of function by regions homologous to the left-hemisphere language networks, maladaptive interference, or adaptation of alternate compensatory strategies. To distinguish between these accounts, we tested language comprehension in 25 aphasic patients using an online sentence-picture matching paradigm while measuring brain activation with MEG. Linguistic conditions included semantically irreversible ("The boy is eating the apple") and reversible ("The boy is pushing the girl") sentences at three levels of syntactic complexity. As expected, patients performed well above chance on irreversible sentences, and at chance on reversible sentences of high complexity. Comprehension of reversible non-complex sentences ranged from nearly perfect to chance, and was highly correlated with offline measures of language comprehension. Lesion analysis revealed that comprehension deficits for reversible sentences were predicted by damage to the left temporal lobe. Although aphasic patients activated homologous areas in the right temporal lobe, such activation was not correlated with comprehension performance. Rather, patients with better comprehension exhibited increased activity in dorsal fronto-parietal regions. Correlations between performance and dorsal network activity occurred bilaterally during perception of sentences, and in the right hemisphere during a post-sentence memory delay. These results suggest that effortful reprocessing of perceived sentences in short-term memory can support improved comprehension in aphasia, and that strategic recruitment of alternative networks, rather than homologous takeover, may account for some findings of right hemisphere language activation in aphasia.
Affiliation(s)
- Jed A Meltzer
- Rotman Research Institute, Baycrest Centre, 3560 Bathurst Street, Toronto, ON, Canada.
11
Abstract
During speech production, auditory processing of self-generated speech is used to adjust subsequent articulations. The current study investigated how the proposed auditory-motor interactions are manifest at the neural level in native and non-native speakers of English who were overtly naming pictures of objects and reading their written names. Data were acquired with functional magnetic resonance imaging and analyzed with dynamic causal modeling. We found that (1) higher activity in articulatory regions caused activity in auditory regions to decrease (i.e., auditory suppression), and (2) higher activity in auditory regions caused activity in articulatory regions to increase (i.e., auditory feedback). In addition, we were able to demonstrate that (3) speaking in a non-native language involves more auditory feedback and less auditory suppression than speaking in a native language. The difference between native and non-native speakers was further supported by finding that, within non-native speakers, there was less auditory feedback for those with better verbal fluency. Consequently, the networks of more fluent non-native speakers looked more like those of native speakers. Together, these findings provide a foundation on which to explore auditory-motor interactions during speech production in other human populations, particularly those with speech difficulties.
12
Use of semantic information to interpret thematic information for real-time sentence comprehension in an SOV language. PLoS One 2013;8:e56106. PMID: 23409134. PMCID: PMC3568076. DOI: 10.1371/journal.pone.0056106.
Abstract
Recently, sentence comprehension in languages other than European languages has been investigated from a cross-linguistic perspective. In this paper, we examine whether and how animacy-related semantic information is used for real-time sentence comprehension in an SOV word-order language (i.e., Japanese). Twenty-three Japanese native speakers participated in this study. They read semantically reversible and non-reversible sentences with canonical word order, and those with scrambled word order. In our results, the second argument position in reversible sentences took longer to read than that in non-reversible sentences, indicating that animacy information is used in second argument processing. In contrast, for the predicate position, there was no difference in reading times, suggesting that animacy information is not used at the predicate position. These results are discussed in relation to sentence comprehension models for SOV word-order languages.
13
Herdener M, Humbel T, Esposito F, Habermeyer B, Cattapan-Ludewig K, Seifritz E. Jazz Drummers Recruit Language-Specific Areas for the Processing of Rhythmic Structure. Cereb Cortex 2012. DOI: 10.1093/cercor/bhs367.
14
Reading without the left ventral occipito-temporal cortex. Neuropsychologia 2012;50:3621-35. PMID: 23017598. PMCID: PMC3524457. DOI: 10.1016/j.neuropsychologia.2012.09.030.
Abstract
The left ventral occipito-temporal cortex (LvOT) is thought to be essential for the rapid parallel letter processing that is required for skilled reading. Here we investigate whether rapid written word identification in skilled readers can be supported by neural pathways that do not involve LvOT. Hypotheses were derived from a stroke patient who acquired dyslexia following extensive LvOT damage. The patient followed a reading trajectory typical of that associated with pure alexia, re-gaining the ability to read aloud many words with declining performance as the length of words increased. Using functional MRI and dynamic causal modelling (DCM), we found that, when short (three to five letter) familiar words were read successfully, visual inputs to the patient’s occipital cortex were connected to left motor and premotor regions via activity in a central part of the left superior temporal sulcus (STS). The patient analysis therefore implied a left hemisphere “reading-without-LvOT” pathway that involved STS. We then investigated whether the same reading-without-LvOT pathway could be identified in 29 skilled readers and whether there was inter-subject variability in the degree to which skilled reading engaged LvOT. We found that functional connectivity in the reading-without-LvOT pathway was strongest in individuals who had the weakest functional connectivity in the LvOT pathway. This observation validates the findings of our patient’s case study. Our findings highlight the contribution of a left hemisphere reading pathway that is activated during the rapid identification of short familiar written words, particularly when LvOT is not involved. Preservation and use of this pathway may explain how patients are still able to read short words accurately when LvOT has been damaged.
|
15
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224 PMCID: PMC3398395 DOI: 10.1016/j.neuroimage.2012.04.062] [Citation(s) in RCA: 1272] [Impact Index Per Article: 106.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2011] [Revised: 04/25/2012] [Accepted: 04/30/2012] [Indexed: 01/17/2023] Open
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation, and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated, but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
|
16
|
Schafer RJ, Page KA, Arora J, Sherwin R, Constable RT. BOLD response to semantic and syntactic processing during hypoglycemia is load-dependent. BRAIN AND LANGUAGE 2012; 120:1-14. [PMID: 22000597 DOI: 10.1016/j.bandl.2011.07.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2009] [Revised: 06/10/2011] [Accepted: 07/22/2011] [Indexed: 05/31/2023]
Abstract
This study investigates how syntactic and semantic load factors impact sentence comprehension and BOLD signal under moderate hypoglycemia. A dual-session, whole-brain fMRI study was conducted on 16 healthy participants using the glucose clamp technique. In one session, they experienced insulin-induced hypoglycemia (plasma glucose at ∼50mg/dL); in the other, plasma glucose was maintained at euglycemic levels (∼100mg/dL). During scans, subjects were presented with sentences of contrasting syntactic (embedding vs. conjunction) and semantic (reversibility vs. irreversibility) load. Semantic factors dominated the overall load effects on both performance (p<0.001) and BOLD response (p<0.01, corrected). Differential BOLD signal was observed in frontal, temporal, temporo-parietal and medio-temporal regions. Hypoglycemia and syntactic factors significantly impacted performance (p=0.002) and BOLD response (p<0.01, corrected) in the reversible clause conditions, more extensively in reversible-embedded than in reversible-conjoined clauses. Hypoglycemia resulted in a robust decrease in performance on reversible clauses and exerted attenuating effects on BOLD unselectively across cortical circuits. The dominance of reversibility in all measures underscores the distinction between the syntactic and semantic contrasts. The syntactic contrast rests on a quantitative difference between the algorithms interpreting embedded and conjoined structures. We suggest that the semantic contrast rests on a qualitative difference between algorithmic mapping of arguments in reversible clauses and heuristic linking in irreversible clauses. Because heuristics drastically reduce resource demand, the operations they support would resist the load-dependent cognitive consequences of hypoglycemia.
Affiliation(s)
- Robin J Schafer
- American Association for the Advancement of Science, Washington, DC, United States.
|
17
|
BLISS: an Artificial Language for Learnability Studies. Cognit Comput 2011. [DOI: 10.1007/s12559-011-9113-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
|
18
|
Abstract
Contemporary models of the neural system that supports reading propose that activity in a ventral occipitotemporal area (vOT) drives activity in higher-order language areas, for example, those in the posterior superior temporal sulcus (pSTS) and anterior superior temporal sulcus (aSTS). We used fMRI with dynamic causal modeling (DCM) to investigate evidence for other routes from visual cortex to the left temporal lobe language areas. First we identified activations in posterior inferior occipital (iO) and vOT areas that were more activated for silent reading than listening to words and sentences; and in pSTS and aSTS areas that were commonly activated for reading relative to false-fonts and listening to words relative to reversed words. Second, in three different DCM analyses, we tested whether visual processing of words modulates activity from the following: (1) iO→vOT, iO→pSTS, both, or neither; (2) vOT→pSTS, iO→pSTS, both or neither; and (3) pSTS→aSTS, vOT→aSTS, both, or neither. We found that reading words increased connectivity (1) from iO to both pSTS and vOT; (2) to pSTS from both iO and vOT; and (3) to aSTS from both vOT and pSTS. These results highlight three potential processing streams in the occipitotemporal cortex: iO→pSTS→aSTS; iO→vOT→aSTS; and iO→vOT→pSTS→aSTS. We discuss these results in terms of cognitive models of reading and propose that efficient reading relies on the integrity of all these pathways.
|
19
|
Meltzer JA, McArdle JJ, Schafer RJ, Braun AR. Neural aspects of sentence comprehension: syntactic complexity, reversibility, and reanalysis. Cereb Cortex 2009; 20:1853-64. [PMID: 19920058 PMCID: PMC2901020 DOI: 10.1093/cercor/bhp249] [Citation(s) in RCA: 60] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Broca's area is preferentially activated by reversible sentences with complex syntax, but various linguistic factors may be responsible for this finding, including syntactic movement, working-memory demands, and post hoc reanalysis. To distinguish between these, we tested the interaction of syntactic complexity and semantic reversibility in a functional magnetic resonance imaging study of sentence-picture matching. During auditory comprehension, semantic reversibility induced selective activation throughout the left perisylvian language network. In contrast, syntactic complexity (object-embedded vs. subject-embedded relative clauses) within reversible sentences engaged only the left inferior frontal gyrus (LIFG) and left precentral gyrus. Within irreversible sentences, only the LIFG was sensitive to syntactic complexity, confirming a unique role for this region in syntactic processing. Nonetheless, larger effects of reversibility itself occurred in the same regions, suggesting that full syntactic parsing may be a nonautomatic process applied as needed. Complex reversible sentences also induced enhanced signals in LIFG and left precentral regions on subsequent picture selection, but with additional recruitment of the right hemisphere homolog area (right inferior frontal gyrus), suggesting that post hoc reanalysis of sentence structure, compared with initial comprehension, engages an overlapping but larger network of brain regions. These dissociable effects may offer a basis for studying the reorganization of receptive language function after brain damage.
Affiliation(s)
- Jed A Meltzer
- Language Section, Voice, Speech, and Language Branch, National Institute on Deafness and Other Communication Disorders National Institutes of Health, Bethesda, MD 20892, USA.
|