1. Cai H, Dong J, Mei L, Feng G, Li L, Wang G, Yan H. Functional and structural abnormalities of the speech disorders: a multimodal activation likelihood estimation meta-analysis. Cereb Cortex 2024;34:bhae075. PMID: 38466117. DOI: 10.1093/cercor/bhae075.
Abstract
Speech disorders are associated with different degrees of functional and structural abnormalities. However, the abnormalities associated with specific disorders, and the common abnormalities shown by all disorders, remain unclear. Herein, a meta-analysis was conducted to integrate the results of 70 studies that compared 1843 speech disorder patients (dysarthria, dysphonia, stuttering, and aphasia) to 1950 healthy controls in terms of brain activity, functional connectivity, gray matter, and white matter fractional anisotropy. The analysis revealed that compared to controls, the dysarthria group showed higher activity in the left superior temporal gyrus and lower activity in the left postcentral gyrus. The dysphonia group had higher activity in the right precentral and postcentral gyrus. The stuttering group had higher activity in the right inferior frontal gyrus and lower activity in the left inferior frontal gyrus. The aphasia group showed lower activity in the bilateral anterior cingulate gyrus and left superior frontal gyrus. Across the four disorders, there was concurrently lower activity, gray matter, and fractional anisotropy in motor and auditory cortices, together with stronger connectivity between the default mode network and the frontoparietal network. These findings enhance our understanding of the neural basis of speech disorders, potentially aiding clinical diagnosis and intervention.
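As a rough illustration of the activation likelihood estimation (ALE) method named in the title, the sketch below builds a toy ALE map: each study's reported peak coordinates (foci) are smoothed with a Gaussian kernel, per-study maps are combined, and the peak of the resulting map is located. The grid, foci, and kernel width are invented for the example; this is not the authors' pipeline, which used dedicated ALE software and permutation-based inference.

```python
import numpy as np

# Toy ALE sketch: each study's peak coordinates (foci) are smoothed with a
# Gaussian kernel, and per-study maps are combined into an ALE map.
# Grid spacing, foci, and kernel width (FWHM) are illustrative values only.
GRID = np.stack(np.meshgrid(np.arange(0, 60, 4),
                            np.arange(0, 60, 4),
                            np.arange(0, 60, 4), indexing="ij"), axis=-1)

def modeled_activation(foci_mm, fwhm_mm=12.0):
    """Per-study map: at each voxel, the probability that at least one focus lies nearby."""
    sigma = fwhm_mm / 2.355                      # convert FWHM to a standard deviation
    prob_absent = np.ones(GRID.shape[:3])
    for focus in foci_mm:
        d2 = ((GRID - np.asarray(focus)) ** 2).sum(axis=-1)
        g = np.exp(-d2 / (2 * sigma ** 2))       # unnormalized Gaussian, peak value 1
        prob_absent *= 1.0 - g                   # treat foci as independent
    return 1.0 - prob_absent

# Hypothetical foci for three "studies" (millimetre coordinates on the toy grid).
studies = [
    [(20, 24, 28), (40, 24, 28)],
    [(22, 26, 30)],
    [(20, 22, 28), (36, 40, 12)],
]
per_study_maps = [modeled_activation(foci) for foci in studies]

# ALE value: probability that at least one study shows modeled activation at the voxel.
ale = 1.0 - np.prod([1.0 - m for m in per_study_maps], axis=0)
peak_index = np.unravel_index(np.argmax(ale), ale.shape)
print("peak ALE value:", round(float(ale.max()), 3), "at grid index", peak_index)
```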
Affiliation(s)
- Hao Cai: Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an 710128, China
- Jie Dong: Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an 710128, China
- Leilei Mei: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University); School of Psychology; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Genyi Feng: Imaging Department, Xi'an GEM Flower Changqing Hospital, Xi'an 710201, China
- Lili Li: Speech Language Therapy Department, Shaanxi Provincial Rehabilitation Hospital, Xi'an 710065, China
- Gang Wang: Imaging Department, Xi'an GEM Flower Changqing Hospital, Xi'an 710201, China
- Hao Yan: Key Laboratory for Artificial Intelligence and Cognitive Neuroscience of Language, Xi'an International Studies University, Xi'an 710128, China
2. Wertz J, Rüttiger L, Bender B, Klose U, Stark RS, Dapper K, Saemisch J, Braun C, Singer W, Dalhoff E, Bader K, Wolpert SM, Knipper M, Munk MHJ. Differential cortical activation patterns: pioneering sub-classification of tinnitus with and without hyperacusis by combining audiometry, gamma oscillations, and hemodynamics. Front Neurosci 2024;17:1232446. PMID: 38239827. PMCID: PMC10794389. DOI: 10.3389/fnins.2023.1232446.
Abstract
The ongoing controversies about the neural basis of tinnitus, whether linked with central neural gain or not, may hamper efforts to develop therapies. We asked to what extent measurable audiometric characteristics of tinnitus without (T) or with co-occurrence of hyperacusis (TH) are distinguishable on the level of cortical responses. To accomplish this, electroencephalography (EEG) and concurrent functional near-infrared spectroscopy (fNIRS) were measured while patients performed an attentionally demanding auditory discrimination task using stimuli within the individual tinnitus frequency (fTin) and a reference frequency (fRef). Resting-state-fMRI-based functional connectivity (rs-fMRI-bfc) in ascending auditory nuclei (AAN), the primary auditory cortex (AC-I), and four other regions relevant for directing attention or regulating distress in temporal, parietal, and prefrontal cortex was compiled and compared to EEG and concurrent fNIRS activity in the same brain areas. We observed no group differences in pure-tone audiometry (PTA) between 10 and 16 kHz. However, the PTA threshold around the tinnitus pitch was positively correlated with the self-rated tinnitus loudness and also correlated with distress in the T group, whereas TH patients reported maximal suffering scores even though they rated their tinnitus loudness as minimal. The T-group exhibited prolonged auditory brainstem response (ABR) wave I latency and reduced ABR wave V amplitudes (indicating reduced neural synchrony in the brainstem), which were associated with lower rs-fMRI-bfc between AAN and the AC-I, as observed in previous studies. In T-subjects, these features were linked with elevated spontaneous and reduced evoked gamma oscillations and with reduced deoxygenated hemoglobin (deoxy-Hb) concentrations in response to stimulation with lower frequencies in temporal cortex (Brodmann area (BA) 41, 42, 22), implying less synchronous auditory responses during active auditory discrimination of reference frequencies. In contrast, in the TH-group, gamma oscillations and hemodynamic responses in temporoparietal regions were reversed during active discrimination of tinnitus frequencies. Our findings suggest that T and TH differ in auditory discrimination and memory-dependent directed attention during active discrimination at either tinnitus or reference frequencies, offering a test paradigm that may allow for more precise sub-classification of tinnitus and future improved treatment approaches.
Affiliation(s)
- Jakob Wertz: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Lukas Rüttiger: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Benjamin Bender: Department of Diagnostic and Interventional Neuroradiology, University of Tübingen, Tübingen, Germany
- Uwe Klose: Department of Diagnostic and Interventional Neuroradiology, University of Tübingen, Tübingen, Germany
- Robert S. Stark: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Konrad Dapper: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany; Department of Biology, Technical University Darmstadt, Darmstadt, Germany
- Jörg Saemisch: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Wibke Singer: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Ernst Dalhoff: Section of Physiological Acoustics and Communication, Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Tübingen, Germany
- Katharina Bader: Section of Physiological Acoustics and Communication, Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Tübingen, Germany
- Stephan M. Wolpert: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Marlies Knipper: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Matthias H. J. Munk: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Biology, Technical University Darmstadt, Darmstadt, Germany
3. Pinner JFL, Collishaw W, Schendel ME, Flynn L, Candelaria-Cook FT, Cerros CM, Williams M, Hill DE, Stephen JM. Examining the effects of prenatal alcohol exposure on performance of the sustained attention to response task in children with an FASD. Hum Brain Mapp 2023;44:6120-6138. PMID: 37792293. PMCID: PMC10619405. DOI: 10.1002/hbm.26501.
Abstract
Prenatal alcohol exposure (PAE), the leading known cause of childhood developmental disability, has long-lasting effects extending throughout the lifespan. It is well documented that children prenatally exposed to alcohol have difficulties inhibiting behavior and sustaining attention. Thus, the Sustained Attention to Response Task (SART), a Go/No-go paradigm, is especially well suited to assess the behavioral and neural functioning characteristics of children with PAE. In this study, we utilized neuropsychological assessment, parent/guardian questionnaires, and magnetoencephalography during SART random and fixed orders to assess characteristics of children 8-12 years old prenatally exposed to alcohol compared to typically developing children. Compared to neurotypical control children, children with a Fetal Alcohol Spectrum Disorder (FASD) diagnosis had significantly decreased performance on neuropsychological measures, had deficiencies in task-based performance, were rated as having increased Attention-Deficit/Hyperactivity Disorder (ADHD) behaviors and as having lower cognitive functioning by their caretakers, and had decreased peak amplitudes in Brodmann's Area 44 (BA44) during SART. Further, MEG peak amplitude in BA44 was found to be significantly associated with neuropsychological test results, parent/guardian questionnaires, and task-based performance, such that decreased amplitude was associated with poorer performance. In exploratory analyses, we also found significant correlations between total cortical volume and MEG peak amplitude, indicating that the reduced amplitude is likely related in part to the reduced overall brain volume often reported in children with PAE. These findings show that children 8-12 years old with an FASD diagnosis have decreased amplitudes in BA44 during the SART random order, and that these deficits are associated with multiple behavioral measures.
Affiliation(s)
- J. F. L. Pinner: Department of Psychology, University of New Mexico, Albuquerque, New Mexico, USA
- W. Collishaw: The Mind Research Network, Albuquerque, New Mexico, USA
- L. Flynn: The Mind Research Network, Albuquerque, New Mexico, USA
- C. M. Cerros: Department of Pediatrics, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
- M. Williams: Department of Pediatrics, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
- D. E. Hill: Department of Psychiatry, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
4. Andronoglou C, Konstantakopoulos G, Simoudi C, Kasselimis D, Evdokimidis I, Tsoukas E, Tsolakopoulos D, Angelopoulou G, Potagas C. Is There a Role of Inferior Frontal Cortex in Motor Timing? A Study of Paced Finger Tapping in Patients with Non-Fluent Aphasia. NeuroSci 2023;4:235-246. PMID: 39483196. PMCID: PMC11523711. DOI: 10.3390/neurosci4030020.
Abstract
The aim of the present study was to investigate deficits in timing reproduction in individuals with non-fluent aphasia after a left hemisphere lesion including the inferior frontal gyrus, in which Broca's region is traditionally localised. Eighteen stroke patients with non-fluent aphasia and twenty-two healthy controls were recruited. We used a finger-tapping test consisting of a synchronisation phase and a continuation phase, each with three fixed intervals (450 ms, 650 ms, and 850 ms). Participants first had to tap in time with the device's auditory stimuli (clips; synchronisation phase) and then continue tapping at the same pace after the stimuli stopped (continuation phase). Patients with aphasia demonstrated lower accuracy and greater variability during reproduction in both phases compared to healthy participants. More specifically, in the continuation phase, individuals with aphasia reproduced longer intervals than the targets, whereas healthy participants displayed accelerated responses. Moreover, patients' timing variability was greater in the absence of the auditory stimuli. This could possibly be attributed to a deficient mental representation of the intervals rather than to motor difficulties (due to the left hemisphere stroke), as the two groups did not differ in tapping reproduction with either hand. Given that previous findings suggest a potential link between the IFG, timing, and working memory, we argue that patients' extra-linguistic cognitive impairments should be considered as possible contributing factors to timing disturbances.
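To make the synchronisation/continuation measures concrete, the sketch below computes reproduction accuracy (mean inter-tap interval and its constant error relative to the target) and variability (coefficient of variation) from a list of tap times. The tap times are invented, and this is a generic illustration rather than the authors' analysis code.

```python
import numpy as np

def interval_stats(tap_times_ms, target_ms):
    """Accuracy and variability of reproduced intervals for one trial.

    tap_times_ms: times of successive taps (ms); target_ms: target interval (ms).
    Returns the mean inter-tap interval, the constant error relative to the
    target (positive = intervals too long), and the coefficient of variation.
    """
    itis = np.diff(np.asarray(tap_times_ms, dtype=float))   # inter-tap intervals
    mean_iti = itis.mean()
    constant_error = mean_iti - target_ms
    cv = itis.std(ddof=1) / mean_iti                         # relative variability
    return mean_iti, constant_error, cv

# Invented continuation-phase tap times (ms) for a 650 ms target interval.
taps = [0, 660, 1335, 2020, 2690, 3390, 4080]
mean_iti, err, cv = interval_stats(taps, target_ms=650)
print(f"mean ITI = {mean_iti:.1f} ms, constant error = {err:+.1f} ms, CV = {cv:.3f}")
```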
Affiliation(s)
- Chrysanthi Andronoglou: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
- George Konstantakopoulos: First Department of Psychiatry, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece; Research Department of Clinical, Education and Health Psychology, University College London, London WC1E 6JB, UK
- Christina Simoudi: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece; Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences, 136 Syngrou Ave., 176 71 Athens, Greece
- Dimitrios Kasselimis: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece; Department of Psychology, Panteion University of Social and Political Sciences, 176 71 Athens, Greece
- Ioannis Evdokimidis: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
- Evangelos Tsoukas: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
- Dimitrios Tsolakopoulos: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
- Georgia Angelopoulou: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
- Constantin Potagas: Neuropsychology and Language Disorders Unit, First Department of Neurology, Eginition Hospital, National and Kapodistrian University of Athens, 115 28 Athens, Greece
5. ten Oever S, Carta S, Kaufeld G, Martin AE. Neural tracking of phrases in spoken language comprehension is automatic and task-dependent. eLife 2022;11:e77468. PMID: 35833919. PMCID: PMC9282854. DOI: 10.7554/elife.77468.
Abstract
Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where either relevant information at linguistic timescales is available, or where this information is absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather, results as a consequence of attending to the timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks corresponding to attending to four different rates: one they would naturally attend to, syllable-rates, word-rates, and phrasal-rates, respectively. We replicated overall findings of stronger phrasal-rate tracking measured with mutual information for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without the presence of an additional task, but also that IFG might be important for temporal integration across various perceptual domains.
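Phrasal-rate tracking in this literature is commonly quantified as mutual information (MI) between a band-limited brain signal and the stimulus signal filtered at the same timescale. The sketch below estimates MI with a simple histogram estimator on synthetic signals; the sampling rate, filter band, and data are illustrative assumptions and do not reproduce the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information (in bits) between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

fs = 200.0                                         # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                       # one minute of synthetic data
stimulus = np.sin(2 * np.pi * 1.0 * t)             # ~1 Hz "phrasal-rate" stimulus signal
rng = np.random.default_rng(0)
brain = 0.6 * stimulus + rng.normal(0, 1, t.size)  # tracking response plus noise

# MI in the phrasal-rate band (0.8-1.2 Hz): intact pairing vs. a shuffled control.
b_phr = bandpass(brain, 0.8, 1.2, fs)
s_phr = bandpass(stimulus, 0.8, 1.2, fs)
print("MI (tracking):", round(mutual_information(b_phr, s_phr), 3))
print("MI (shuffled):", round(mutual_information(rng.permutation(b_phr), s_phr), 3))
```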
Affiliation(s)
- Sanne ten Oever: Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Sara Carta: Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; ADAPT Centre, School of Computer Science and Statistics, University of Dublin, Trinity College, Dublin, Ireland; CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Greta Kaufeld: Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Andrea E Martin: Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands
6. Hoddinott JD, Schuit D, Grahn JA. Comparisons between short-term memory systems for verbal and rhythmic stimuli. Neuropsychologia 2021;163:108080. PMID: 34728240. DOI: 10.1016/j.neuropsychologia.2021.108080.
Abstract
Auditory short-term memory is often conceived of as a unitary capacity, with memory for different auditory materials (such as syllables, pitches, rhythms) posited to rely on similar neural mechanisms. One spontaneous behavior observed in short-term memory studies is 'chunking'. For example, individuals often recount digit sequences in groups, or chunks, of 3-4 digits, and chunking is associated with better performance. Chunking may also operate in musical rhythm, with beats acting as potential chunk boundaries for tones in rhythmic sequences. Similar to chunking, beat-based structure in rhythms also improves performance. Thus, it is possible that beat processing relies on the same mechanisms that underlie chunking of verbal material. The current fMRI study examined whether beat perception is indeed a type of chunking, measuring brain responses to chunked and 'unchunked' letter sequences relative to beat-based and non-beat-based rhythmic sequences. Participants completed a sequence discrimination task, and comparisons between stimulus encoding, maintenance, and discrimination were made for both rhythmic and verbal sequences. Overall, rhythm and verbal short-term memory networks overlapped substantially. When contrasting rhythmic and verbal conditions, rhythms activated basal ganglia, supplementary motor area, and anterior insula more than letter strings did, during both encoding and discrimination. Verbal letter strings activated bilateral auditory cortex more than rhythms did during encoding, and parietal cortex, precuneus, and middle frontal gyri more than rhythms did during discrimination. Importantly, there was a significant interaction in the basal ganglia during encoding: activation for beat-based rhythms was greater than for non-beat-based rhythms, but verbal chunked and unchunked conditions did not differ. The interaction indicates that beat perception is not simply a case of chunking, suggesting a dissociation between beat processing and chunking-based grouping mechanisms.
Affiliation(s)
- Joshua D Hoddinott: Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Neuroscience Program, University of Western Ontario, London, Ontario, Canada
- Dirk Schuit: Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Jessica A Grahn: Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Department of Psychology, University of Western Ontario, London, Ontario, Canada
7. Jayakody DMP, Menegola HK, Yiannos JM, Goodman-Simpson J, Friedland PL, Taddei K, Laws SM, Weinborn M, Martins RN, Sohrabi HR. The Peripheral Hearing and Central Auditory Processing Skills of Individuals With Subjective Memory Complaints. Front Neurosci 2020;14:888. PMID: 32982675. PMCID: PMC7475691. DOI: 10.3389/fnins.2020.00888.
Abstract
Purpose This study examined the central auditory processing (CAP) assessment results of adults between 45 and 85 years of age with probable pre-clinical Alzheimer's disease, i.e., individuals with subjective memory complaints (SMCs), as compared to those who were not reporting significant levels of memory complaints (non-SMCs). It was hypothesized that the SMC group would perform significantly worse on tests of central auditory skills compared to participants with non-SMCs (control group). Methods A total of 95 participants were recruited from the larger Western Australia Memory Study and were classified as SMCs (N = 61; 20 males and 41 females, mean age 71.47 ± 7.18 years) and non-SMCs (N = 34; 10 males, 24 females, mean age 68.85 ± 7.69 years). All participants completed a peripheral hearing assessment, a CAP assessment battery including Dichotic Digits, the Duration Pattern Test, Dichotic Sentence Identification, Synthetic Sentence Identification with Ipsilateral Competing Message (SSI-ICM), and the Quick Speech-in-Noise test, and a cognitive screening assessment. Results The SMC group performed significantly worse than the control group in the SSI-ICM −10 and −20 dB signal-to-noise conditions. No significant differences were found between the two groups on the peripheral hearing threshold measurements and other CAP assessments. Conclusions The results suggest that individuals with SMCs perform more poorly on specific CAP assessments than controls. The poor CAP in SMC individuals may result in a higher cost to their finite pool of cognitive resources. The CAP results provide yet another biomarker supporting the hypothesis that SMCs may be a primary indication of neuropathological changes in the brain. Longitudinal follow-up of individuals with SMCs and decreased CAP abilities should clarify whether this group is at higher risk of developing dementia compared to non-SMCs and SMC individuals without CAP difficulties.
Affiliation(s)
- Dona M P Jayakody: Ear Science Institute Australia, Subiaco, WA, Australia; Ear Sciences Centre, Faculty of Health and Medical Sciences, The University of Western Australia, Crawley, WA, Australia
- Jessica M Yiannos: Ear Science Institute Australia, Subiaco, WA, Australia; School of Human Sciences, The University of Western Australia, Crawley, WA, Australia
- Peter L Friedland: Department of Otolaryngology Head Neck Skull Base Surgery, Sir Charles Gairdner Hospital, Nedlands, WA, Australia; School of Medicine, University Notre Dame, Fremantle, WA, Australia
- Kevin Taddei: School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia
- Simon M Laws: Collaborative Genomics Group, School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; School of Pharmacy and Biomedical Sciences, Faculty of Health Sciences, Curtin Health Innovation Research Institute, Curtin University, Bentley, WA, Australia
- Michael Weinborn: School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; School of Psychological Science, The University of Western Australia, Nedlands, WA, Australia
- Ralph N Martins: School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Hamid R Sohrabi: School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA, Australia; Department of Biomedical Sciences, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia; Centre for Healthy Ageing, School of Psychology and Exercise Science, Murdoch University, Murdoch, WA, Australia
8. Wang F, Karipidis II, Pleisch G, Fraga-González G, Brem S. Development of Print-Speech Integration in the Brain of Beginning Readers With Varying Reading Skills. Front Hum Neurosci 2020;14:289. PMID: 32922271. PMCID: PMC7457077. DOI: 10.3389/fnhum.2020.00289.
Abstract
Learning print-speech sound correspondences is a crucial step at the beginning of reading acquisition and often impaired in children with developmental dyslexia. Despite increasing insight into audiovisual language processing, it remains largely unclear how integration of print and speech develops at the neural level during initial learning in the first years of schooling. To investigate this development, 32 healthy, German-speaking children at varying risk for developmental dyslexia (17 typical readers and 15 poor readers) participated in a longitudinal study including behavioral and fMRI measurements in first (T1) and second (T2) grade. We used an implicit audiovisual (AV) non-word target detection task aimed at characterizing differential activation to congruent (AVc) and incongruent (AVi) audiovisual non-word pairs. While children’s brain activation did not differ between AVc and AVi pairs in first grade, an incongruency effect (AVi > AVc) emerged in bilateral inferior temporal and superior frontal gyri in second grade. Of note, pseudoword reading performance improvements with time were associated with the development of the congruency effect (AVc > AVi) in the left posterior superior temporal gyrus (STG) from first to second grade. Finally, functional connectivity analyses indicated divergent development and reading expertise dependent coupling from the left occipito-temporal and superior temporal cortex to regions of the default mode (precuneus) and fronto-temporal language networks. Our results suggest that audiovisual integration areas as well as their functional coupling to other language areas and areas of the default mode network show a different development in poor vs. typical readers at varying familial risk for dyslexia.
Affiliation(s)
- Fang Wang: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong
- Iliana I Karipidis: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Center for Interdisciplinary Brain Sciences Research, Department of Psychiatry and Behavioral Sciences, School of Medicine, Stanford University, Stanford, CA, United States
- Georgette Pleisch: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland
- Gorka Fraga-González: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland
- Silvia Brem: Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital of Psychiatry, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zürich, Zurich, Switzerland
9. Torres-Prioris MJ, López-Barroso D, Càmara E, Fittipaldi S, Sedeño L, Ibáñez A, Berthier ML, García AM. Neurocognitive signatures of phonemic sequencing in expert backward speakers. Sci Rep 2020;10:10621. PMID: 32606382. PMCID: PMC7326922. DOI: 10.1038/s41598-020-67551-z.
Abstract
Despite its prolific growth, neurolinguistic research on phonemic sequencing has largely neglected the study of individuals with highly developed skills in this domain. To bridge this gap, we report multidimensional signatures of two experts in backward speech, that is, the capacity to produce utterances by reversing the order of phonemes while retaining their identity. Our approach included behavioral assessments of backward and forward speech alongside neuroimaging measures of voxel-based morphometry, diffusion tensor imaging, and resting-state functional connectivity. Relative to controls, both backward speakers exhibited behavioral advantages for reversing words and sentences of varying complexity, irrespective of working memory skills. These patterns were accompanied by increased grey matter volume, higher mean diffusivity, and enhanced functional connectivity along dorsal and ventral stream regions mediating phonological and other linguistic operations, with complementary support of areas subserving associative-visual and domain-general processes. Still, the specific loci of these neural patterns differed between both subjects, suggesting individual variability in the correlates of expert backward speech. Taken together, our results offer new vistas on the domain of phonemic sequencing, while illuminating neuroplastic patterns underlying extraordinary language abilities.
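Backward speech as defined here operates on phonemes rather than letters. The toy function below reverses a word's phoneme sequence while keeping each phoneme intact; the miniature pronunciation lexicon is an invented stand-in for a real transcription dictionary.

```python
# Minimal illustration of phonemic reversal: the unit of reversal is the phoneme,
# so "ship" -> /ʃ ɪ p/ -> /p ɪ ʃ/, not the letter string "pihs".
# The pronunciation lexicon below is invented for the example.
LEXICON = {
    "ship":   ["ʃ", "ɪ", "p"],
    "speech": ["s", "p", "iː", "tʃ"],
    "market": ["m", "ɑː", "k", "ɪ", "t"],
}

def backward_phonemes(word):
    """Return the word's phonemes in reversed order, with phoneme identity preserved."""
    return list(reversed(LEXICON[word]))

for w in LEXICON:
    print(w, "->", " ".join(backward_phonemes(w)))
```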
Affiliation(s)
- María José Torres-Prioris: Cognitive Neurology and Aphasia Unit, Centro de Investigaciones Médico-Sanitarias, Instituto de Investigación Biomédica de Málaga (IBIMA), University of Malaga, Malaga, Spain; Area of Psychobiology, Faculty of Psychology and Speech Therapy, University of Malaga, Malaga, Spain
- Diana López-Barroso: Cognitive Neurology and Aphasia Unit, Centro de Investigaciones Médico-Sanitarias, Instituto de Investigación Biomédica de Málaga (IBIMA), University of Malaga, Malaga, Spain; Area of Psychobiology, Faculty of Psychology and Speech Therapy, University of Malaga, Malaga, Spain
- Estela Càmara: Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Sol Fittipaldi: Universidad de San Andrés, Vito Dumas 284, B1644BID Victoria, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
- Lucas Sedeño: National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina
- Agustín Ibáñez: Universidad de San Andrés, Vito Dumas 284, B1644BID Victoria, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile; Global Brain Health Institute, University of California, San Francisco, United States
- Marcelo L Berthier: Cognitive Neurology and Aphasia Unit, Centro de Investigaciones Médico-Sanitarias, Instituto de Investigación Biomédica de Málaga (IBIMA), University of Malaga, Malaga, Spain
- Adolfo M García: Universidad de San Andrés, Vito Dumas 284, B1644BID Victoria, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Global Brain Health Institute, University of California, San Francisco, United States; Faculty of Education, National University of Cuyo (UNCuyo), Mendoza, Argentina; Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, Chile
10.
Abstract
Syntax, the structure of sentences, enables humans to express an infinite range of meanings through finite means. The neurobiology of syntax has been intensely studied but with little consensus. Two main candidate regions have been identified: the posterior inferior frontal gyrus (pIFG) and the posterior middle temporal gyrus (pMTG). Integrating research in linguistics, psycholinguistics, and neuroscience, we propose a neuroanatomical framework for syntax that attributes distinct syntactic computations to these regions in a unified model. The key theoretical advances are adopting a modern lexicalized view of syntax in which the lexicon and syntactic rules are intertwined, and recognizing a computational asymmetry in the role of syntax during comprehension and production. Our model postulates a hierarchical lexical-syntactic function to the pMTG, which interconnects previously identified speech perception and conceptual-semantic systems in the temporal and inferior parietal lobes, crucial for both sentence production and comprehension. These relational hierarchies are transformed via the pIFG into morpho-syntactic sequences, primarily tied to production. We show how this architecture provides a better account of the full range of data and is consistent with recent proposals regarding the organization of phonological processes in the brain.
Affiliation(s)
- William Matchin: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, 29208, USA
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, 92697, USA; Department of Language Science, University of California, Irvine, Irvine, CA, 92697, USA
11. Jarret T, Stockert A, Kotz SA, Tillmann B. Implicit learning of artificial grammatical structures after inferior frontal cortex lesions. PLoS One 2019;14:e0222385. PMID: 31539390. PMCID: PMC6754135. DOI: 10.1371/journal.pone.0222385.
Abstract
OBJECTIVE Previous research associated the left inferior frontal cortex with implicit structure learning. The present study tested patients with lesions encompassing the left inferior frontal gyrus (LIFG; including Brodmann areas 44 and 45) to further investigate this cognitive function, notably by using non-verbal material, implicit investigation methods, and by enhancing potential remaining function via dynamic attending. Patients and healthy matched controls were exposed to an artificial pitch grammar in an implicit learning paradigm to circumvent the potential influence of impaired language processing. METHODS Patients and healthy controls listened to pitch sequences generated within a finite-state grammar (exposure phase) and then performed a categorization task on new pitch sequences (test phase). Participants were not informed about the underlying grammar in either the exposure phase or the test phase. Furthermore, the pitch structures were presented in a highly regular temporal context as the beneficial impact of temporal regularity (e.g. meter) in learning and perception has been previously reported. Based on the Dynamic Attending Theory (DAT), we hypothesized that a temporally regular context helps developing temporal expectations that, in turn, facilitate event perception, and thus benefit artificial grammar learning. RESULTS Electroencephalography results suggest preserved artificial grammar learning of pitch structures in patients and healthy controls. For both groups, analyses of event-related potentials revealed a larger early negativity (100-200 msec post-stimulus onset) in response to ungrammatical than grammatical pitch sequence events. CONCLUSIONS These findings suggest that (i) the LIFG does not play an exclusive role in the implicit learning of artificial pitch grammars, and (ii) the use of non-verbal material and an implicit task reveals cognitive capacities that remain intact despite lesions to the LIFG. These results provide grounds for training and rehabilitation, that is, learning of non-verbal grammars that may impact the relearning of verbal grammars.
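Exposure material in such studies is typically generated from a finite-state grammar: legal sequences correspond to paths through a directed graph of states, and ungrammatical test items violate its transitions. The sketch below generates pitch-label sequences from an invented finite-state grammar and checks whether a given sequence is grammatical; the states and transitions are illustrative and are not those used in the study.

```python
import random

# Invented finite-state grammar over pitch labels (A-E). Each state maps to the
# transitions that may leave it: (emitted pitch label, next state).
GRAMMAR = {
    0: [("A", 1), ("B", 2)],
    1: [("C", 2), ("D", 3)],
    2: [("C", 3), ("E", 4)],
    3: [("E", 4), ("A", 4)],
    4: [],                      # accepting (final) state
}

def generate_sequence(rng=random):
    """Random walk from the start state to the final state; returns the emitted labels."""
    state, labels = 0, []
    while GRAMMAR[state]:
        label, state = rng.choice(GRAMMAR[state])
        labels.append(label)
    return labels

def is_grammatical(labels):
    """True if the label sequence can be produced by some path through the grammar."""
    states = {0}
    for label in labels:
        states = {nxt for s in states for (lab, nxt) in GRAMMAR[s] if lab == label}
        if not states:
            return False
    return 4 in states          # the sequence must end in the final state

random.seed(1)
seq = generate_sequence()
print("exposure item:", seq, "grammatical:", is_grammatical(seq))
print("violation item:", ["B", "D", "E"], "grammatical:", is_grammatical(["B", "D", "E"]))
```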
Affiliation(s)
- Tatiana Jarret: CNRS, UMR5292, INSERM, U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France; University Lyon 1, Villeurbanne, France
- Anika Stockert: Language and Aphasia Laboratory, Department of Neurology, University of Leipzig, Leipzig, Germany
- Sonja A. Kotz: Dept. of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Dept. of Neuropsychology, Maastricht University, Maastricht, The Netherlands; Faculty of Psychology and Neuroscience, Dept. of Psychopharmacology, Maastricht University, Maastricht, The Netherlands
- Barbara Tillmann: CNRS, UMR5292, INSERM, U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France; University Lyon 1, Villeurbanne, France
12. Basilakos A, Smith KG, Fillmore P, Fridriksson J, Fedorenko E. Functional Characterization of the Human Speech Articulation Network. Cereb Cortex 2019;28:1816-1830. PMID: 28453613. DOI: 10.1093/cercor/bhx100.
Abstract
A number of brain regions have been implicated in articulation, but their precise computations remain debated. Using functional magnetic resonance imaging, we examine the degree of functional specificity of articulation-responsive brain regions to constrain hypotheses about their contributions to speech production. We find that articulation-responsive regions (1) are sensitive to articulatory complexity, but (2) are largely nonoverlapping with nearby domain-general regions that support diverse goal-directed behaviors. Furthermore, premotor articulation regions show selectivity for speech production over some related tasks (respiration control), but not others (nonspeech oral-motor [NSO] movements). This overlap between speech and nonspeech movements concords with electrocorticographic evidence that these regions encode articulators and their states, and with patient evidence whereby articulatory deficits are often accompanied by oral-motor deficits. In contrast, the superior temporal regions show strong selectivity for articulation relative to nonspeech movements, suggesting that these regions play a specific role in speech planning/production. Finally, articulation-responsive portions of posterior inferior frontal gyrus show some selectivity for articulation, in line with the hypothesis that this region prepares an articulatory code that is passed to the premotor cortex. Taken together, these results inform the architecture of the human articulation system.
Affiliation(s)
- Alexandra Basilakos: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Kimberly G Smith: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA; Department of Speech Pathology and Audiology, University of South Alabama, Mobile, AL 36688, USA
- Paul Fillmore: Department of Communication Sciences and Disorders, Baylor University, Waco, TX 76798, USA
- Julius Fridriksson: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Evelina Fedorenko: Department of Psychiatry, Harvard Medical School, Boston, MA 02115, USA; Department of Psychiatry, Massachusetts General Hospital, Boston, MA 02114, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
13. Mencarelli L, Neri F, Momi D, Menardi A, Rossi S, Rossi A, Santarnecchi E. Stimuli, presentation modality, and load-specific brain activity patterns during n-back task. Hum Brain Mapp 2019;40:3810-3831. PMID: 31179585. DOI: 10.1002/hbm.24633.
Abstract
Working memory (WM) refers to a set of cognitive processes that allows for the temporary storage and manipulation of information, crucial for everyday life skills. WM deficits are present in several neurological, psychiatric, and neurodevelopmental disorders, thus making the full understanding of its neural correlates a key aspect for the implementation of cognitive training interventions. Here, we present a quantitative meta-analysis focusing on the underlying neural substrates upon which the n-back, one of the most commonly used tasks for WM assessment, is believed to rely, as highlighted by functional magnetic resonance imaging and positron emission tomography findings. Relevant published work was scrutinized through the activation likelihood estimation (ALE) statistical framework in order to generate a set of task-specific activation maps, according to n-back difficulty. Our results confirm the known involvement of frontoparietal areas across different types of n-back tasks, as well as the recruitment of subcortical structures, the cerebellum, and the precuneus. Specific activation maps for four stimulus types, six presentation modalities, three WM loads, and their combinations are provided and discussed. Moreover, functional overlap with resting-state networks highlighted a strong similarity between n-back nodes and the Dorsal Attention Network, with less overlap with other networks such as the Salience, Language, and Sensorimotor networks. Additionally, neural deactivations during n-back tasks and their functional connectivity profile were examined. Clinical and functional implications are discussed in the context of potential noninvasive brain stimulation and cognitive enhancement/rehabilitation programs.
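For readers unfamiliar with the task itself, the sketch below generates a letter n-back block at a given load and scores yes/no responses; a trial is a target when the current item matches the item presented n positions earlier. The letter set, block length, and target rate are arbitrary choices for illustration.

```python
import random

def make_nback_block(n, length=20, target_rate=0.3, rng=random):
    """Generate an n-back letter sequence and the target flags it implies."""
    letters = list("BCDFGHKLMNPRSTVZ")
    stims = []
    for i in range(length):
        if i >= n and rng.random() < target_rate:
            stims.append(stims[i - n])                 # force a target (match n back)
        else:
            pool = [c for c in letters if i < n or c != stims[i - n]]
            stims.append(rng.choice(pool))
    targets = [i >= n and stims[i] == stims[i - n] for i in range(length)]
    return stims, targets

def score(responses, targets):
    """Hit rate and false-alarm rate for a list of yes/no responses."""
    hits = sum(r and t for r, t in zip(responses, targets))
    false_alarms = sum(r and not t for r, t in zip(responses, targets))
    n_targets = sum(targets)
    return hits / max(n_targets, 1), false_alarms / max(len(targets) - n_targets, 1)

random.seed(0)
stims, targets = make_nback_block(n=2)
print("stimuli:", " ".join(stims))
print("target positions:", [i for i, t in enumerate(targets) if t])
print("perfect responder hit/false-alarm rates:", score(targets, targets))
```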
Affiliation(s)
- Lucia Mencarelli: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy
- Francesco Neri: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy
- Davide Momi: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy
- Arianna Menardi: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy
- Simone Rossi: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy; Siena Robotics and Systems Lab (SIRS-Lab), Engineering and Mathematics Department, University of Siena, Siena, Italy; Human Physiology Section, Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy
- Alessandro Rossi: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy; Human Physiology Section, Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy
- Emiliano Santarnecchi: Siena Brain Investigation & Neuromodulation Lab (Si-BIN Lab), Department of Medicine, Surgery and Neuroscience, Neurology and Clinical Neurophysiology Section, University of Siena, Siena, Italy; Berenson-Allen Center for Non-Invasive Brain Stimulation, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
14. Within the framework of the dual-system model, voluntary action is central to cognition. Atten Percept Psychophys 2019;81:2192-2216. PMID: 31062301. DOI: 10.3758/s13414-019-01737-0.
Abstract
A new version of the dual-system hypothesis is described. Consistent with earlier models, the improvisational subsystem of the instrumental system, which includes the occipital cortex, inferior temporal cortex, and medial temporal cortex, especially the hippocampus, directs the construction of visual representations of the world and constructs ad-hoc responses to novel targets. The habit system, which includes the occipital cortex; parietal cortex; premotor, supplementary motor, and ventrolateral areas of frontal cortex; and the basal ganglia, especially the caudate nucleus, encodes sequences of actions and generates previously successful actions to familiar targets. However, unlike in previous dual-system models, human cognitive activity involved in task performance is not exclusively associated with one system or the other. Rather, the two systems make it possible for people to learn a variety of skills that draw on the competencies of both systems. The collective effects of these skills define human cognition. So, in contrast with earlier versions of the dual-system hypothesis, which identified the habit system solely with procedural learning and implicit improvements in task performance, the model presented here attributes a direct role in declarative-memory tasks to the habit system. Furthermore, within the model, the computational competencies of the two systems are used to construct purposeful sequences of actions, that is, skills. Human cognition is the result of the performance of these skills. Thus, voluntary action is central to human cognition.
15. Savill NJ, Cornelissen P, Pahor A, Jefferies E. rTMS evidence for a dissociation in short-term memory for spoken words and nonwords. Cortex 2019;112:5-22. DOI: 10.1016/j.cortex.2018.07.021.
16. Mugler EM, Tate MC, Livescu K, Templer JW, Goldrick MA, Slutzky MW. Differential Representation of Articulatory Gestures and Phonemes in Precentral and Inferior Frontal Gyri. J Neurosci 2018;38:9803-9813. PMID: 30257858. PMCID: PMC6234299. DOI: 10.1523/jneurosci.1206-18.2018.
Abstract
Speech is a critical form of human communication and is central to our daily lives. Yet, despite decades of study, an understanding of the fundamental neural control of speech production remains incomplete. Current theories model speech production as a hierarchy from sentences and phrases down to words, syllables, speech sounds (phonemes), and the actions of vocal tract articulators used to produce speech sounds (articulatory gestures). Here, we investigate the cortical representation of articulatory gestures and phonemes in ventral precentral and inferior frontal gyri in men and women. Our results indicate that ventral precentral cortex represents gestures to a greater extent than phonemes, while inferior frontal cortex represents both gestures and phonemes. These findings suggest that speech production shares a common cortical representation with that of other types of movement, such as arm and hand movements. This has important implications both for our understanding of speech production and for the design of brain-machine interfaces to restore communication to people who cannot speak.SIGNIFICANCE STATEMENT Despite being studied for decades, the production of speech by the brain is not fully understood. In particular, the most elemental parts of speech, speech sounds (phonemes) and the movements of vocal tract articulators used to produce these sounds (articulatory gestures), have both been hypothesized to be encoded in motor cortex. Using direct cortical recordings, we found evidence that primary motor and premotor cortices represent gestures to a greater extent than phonemes. Inferior frontal cortex (part of Broca's area) appears to represent both gestures and phonemes. These findings suggest that speech production shares a similar cortical organizational structure with the movement of other body parts.
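The claim that a region represents gestures to a greater extent than phonemes is typically operationalized by comparing how well gesture labels versus phoneme labels can be decoded from the same neural features. The sketch below runs that comparison with a cross-validated linear classifier on simulated data; the feature matrices and label sets are synthetic stand-ins for the intracranial recordings, so only the analysis logic, not the result, carries over.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 200, 40
gesture_labels = rng.integers(0, 4, n_trials)     # e.g., 4 articulatory gestures
phoneme_labels = rng.integers(0, 8, n_trials)     # e.g., 8 phonemes

# Simulated "ventral precentral" features: driven mostly by gesture identity.
gesture_means = rng.normal(0, 1, (4, n_features))
X = gesture_means[gesture_labels] + 0.2 * rng.normal(0, 1, (n_trials, n_features))

clf = LogisticRegression(max_iter=1000)
acc_gesture = cross_val_score(clf, X, gesture_labels, cv=5).mean()
acc_phoneme = cross_val_score(clf, X, phoneme_labels, cv=5).mean()

print(f"gesture decoding accuracy: {acc_gesture:.2f} (chance 0.25)")
print(f"phoneme decoding accuracy: {acc_phoneme:.2f} (chance 0.125)")
```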
Affiliation(s)
- Karen Livescu: Toyota Technological Institute at Chicago, Chicago, Illinois 60637
- Marc W Slutzky: Departments of Neurology, Physiology, and Physical Medicine & Rehabilitation, Northwestern University, Chicago, Illinois 60611
17. Okada K, Matchin W, Hickok G. Phonological Feature Repetition Suppression in the Left Inferior Frontal Gyrus. J Cogn Neurosci 2018;30:1549-1557. PMID: 29877763. DOI: 10.1162/jocn_a_01287.
Abstract
Models of speech production posit a role for the motor system, predominantly the posterior inferior frontal gyrus, in encoding complex phonological representations for speech production, at the phonemic, syllable, and word levels [Roelofs, A. A dorsal-pathway account of aphasic language production: The WEAVER++/ARC model. Cortex, 59(Suppl. C), 33-48, 2014; Hickok, G. Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145, 2012; Guenther, F. H. Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39, 350-365, 2006]. However, phonological theory posits subphonemic units of representation, namely phonological features [Chomsky, N., & Halle, M. The sound pattern of English, 1968; Jakobson, R., Fant, G., & Halle, M. Preliminaries to speech analysis. The distinctive features and their correlates. Cambridge, MA: MIT Press, 1951], that specify independent articulatory parameters of speech sounds, such as place and manner of articulation. Therefore, motor brain systems may also incorporate phonological features into speech production planning units. Here, we add support for such a role with an fMRI experiment of word sequence production using a phonemic similarity manipulation. We adapted and modified the experimental paradigm of Oppenheim and Dell [Oppenheim, G. M., & Dell, G. S. Inner speech slips exhibit lexical bias, but not the phonemic similarity effect. Cognition, 106, 528-537, 2008; Oppenheim, G. M., & Dell, G. S. Motor movement matters: The flexible abstractness of inner speech. Memory & Cognition, 38, 1147-1160, 2010]. Participants silently articulated words cued by sequential visual presentation that varied in degree of phonological feature overlap in consonant onset position: high overlap (two shared phonological features; e.g., /r/ and /l/) or low overlap (one shared phonological feature, e.g., /r/ and /b/). We found a significant repetition suppression effect in the left posterior inferior frontal gyrus, with increased activation for phonologically dissimilar words compared with similar words. These results suggest that phonemes, particularly phonological features, are part of the planning units of the motor speech system.
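A minimal way to formalize the high- versus low-overlap manipulation is to represent each onset consonant by a small set of phonological features and count shared values. The feature table below (place, manner, voicing) is a simplified, partly assumed specification for a few English consonants and is coarser than a full feature-theoretic analysis.

```python
# Simplified (place, manner, voicing) features for a few English onset consonants.
# Values are a coarse illustration, not a complete phonological analysis.
FEATURES = {
    "r": {"place": "alveolar", "manner": "approximant",         "voicing": "voiced"},
    "l": {"place": "alveolar", "manner": "lateral approximant",  "voicing": "voiced"},
    "b": {"place": "bilabial", "manner": "stop",                 "voicing": "voiced"},
    "p": {"place": "bilabial", "manner": "stop",                 "voicing": "voiceless"},
}

def shared_features(c1, c2):
    """Number of feature dimensions on which two consonants agree."""
    f1, f2 = FEATURES[c1], FEATURES[c2]
    return sum(f1[dim] == f2[dim] for dim in f1)

# High-overlap pair (as in the study's /r/-/l/ example) vs. a low-overlap pair.
print("/r/ vs /l/ shared features:", shared_features("r", "l"))   # 2 (place, voicing)
print("/r/ vs /b/ shared features:", shared_features("r", "b"))   # 1 (voicing)
```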
18. Grossberg S. Desirability, availability, credit assignment, category learning, and attention: Cognitive-emotional and working memory dynamics of orbitofrontal, ventrolateral, and dorsolateral prefrontal cortices. Brain Neurosci Adv 2018;2:2398212818772179. PMID: 32166139. PMCID: PMC7058233. DOI: 10.1177/2398212818772179.
Abstract
BACKGROUND The prefrontal cortices play an essential role in cognitive-emotional and working memory processes through interactions with multiple brain regions. METHODS This article further develops a unified neural architecture that explains many recent and classical data about prefrontal function and makes testable predictions. RESULTS Prefrontal properties of desirability, availability, credit assignment, category learning, and feature-based attention are explained. These properties arise through interactions of orbitofrontal, ventrolateral prefrontal, and dorsolateral prefrontal cortices with the inferotemporal cortex, perirhinal cortex, parahippocampal cortices; ventral bank of the principal sulcus, ventral prearcuate gyrus, frontal eye fields, hippocampus, amygdala, basal ganglia, hypothalamus, and visual cortical areas V1, V2, V3A, V4, middle temporal cortex, medial superior temporal area, lateral intraparietal cortex, and posterior parietal cortex. Model explanations also include how the value of visual objects and events is computed, which objects and events cause desired consequences and which may be ignored as predictively irrelevant, and how to plan and act to realise these consequences, including how to selectively filter expected versus unexpected events, leading to movements towards, and conscious perception of, expected events. Modelled processes include reinforcement learning and incentive motivational learning; object and spatial working memory dynamics; and category learning, including the learning of object categories, value categories, object-value categories, and sequence categories, or list chunks. CONCLUSION This article hereby proposes a unified neural theory of prefrontal cortex and its functions.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, Biomedical Engineering, Boston University, Boston, MA, USA
19
Birba A, García-Cordero I, Kozono G, Legaz A, Ibáñez A, Sedeño L, García AM. Losing ground: Frontostriatal atrophy disrupts language embodiment in Parkinson’s and Huntington’s disease. Neurosci Biobehav Rev 2017; 80:673-687. [DOI: 10.1016/j.neubiorev.2017.07.011] [Citation(s) in RCA: 67] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2017] [Revised: 07/25/2017] [Accepted: 07/27/2017] [Indexed: 12/13/2022]
20
Involvement of the Left Supramarginal Gyrus in Manipulation Judgment Tasks: Contributions to Theories of Tool Use. J Int Neuropsychol Soc 2017. [PMID: 28625209 DOI: 10.1017/s1355617717000455] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
OBJECTIVES Two theories of tool use, namely the gesture engram and the technical reasoning theories, make distinct predictions about the involvement of the left inferior parietal lobe (IPL) in manipulation judgment tasks. The objective here is to test these alternative predictions based on previous studies on manipulation judgment tasks using transcranial magnetic stimulation (TMS) targeting the left supramarginal gyrus (SMG). METHODS We review recent TMS studies on manipulation judgment tasks and confront these data with predictions made by both tool use theories. RESULTS The left SMG is a highly intertwined region, organized into several functionally distinct areas, and TMS may have disrupted a cortical network involved in the ability to use tools rather than only one functional area supporting manipulation knowledge. Moreover, manipulation judgment tasks may be impaired following virtual lesions outside the IPL. CONCLUSIONS These data are more in line with the technical reasoning hypothesis, which assumes that the left IPL does not store manipulation knowledge per se. (JINS, 2017, 23, 685-691).
21
Kalm K, Norris D. Reading positional codes with fMRI: Problems and solutions. PLoS One 2017; 12:e0176585. [PMID: 28520725 PMCID: PMC5435169 DOI: 10.1371/journal.pone.0176585] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Accepted: 04/12/2017] [Indexed: 01/18/2023] Open
Abstract
Neural mechanisms that bind items into sequences have been investigated in a large body of research in animal neurophysiology and human neuroimaging. However, a major problem in interpreting these data arises from the fact that several unrelated processes, such as memory load, sensory adaptation, and reward expectation, also change in a consistent manner as the sequence unfolds. In this paper we use computational simulations and data from two fMRI experiments to show that a host of unrelated neural processes can masquerade as sequence representations. We show that dissociating such unrelated processes from a dedicated sequence representation is an especially difficult problem for fMRI data, which is almost exclusively the modality used in human experiments. We suggest that such fMRI results must be treated with caution and that in many cases the assumed neural representation might actually reflect unrelated processes.
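To make the confound concrete, here is a hedged sketch (simulated data only, with arbitrary trial, position, and voxel counts that are not taken from the paper): responses that merely scale with a monotonic nuisance variable such as memory load can be decoded as "position" well above chance, even though no dedicated positional code was built in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_positions, n_voxels = 200, 4, 50

# Each simulated "voxel" responds to a confound (e.g. memory load) that grows
# monotonically with serial position, plus noise; no positional code exists.
positions = rng.integers(0, n_positions, size=n_trials)
gain = rng.uniform(0.5, 1.5, size=n_voxels)        # per-voxel sensitivity to load
load = positions[:, None] * gain[None, :]           # load scales with position
data = load + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# A decoder nevertheless reads out "position" far above the 25% chance level.
acc = cross_val_score(LogisticRegression(max_iter=1000), data, positions, cv=5)
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = {1 / n_positions:.2f})")
```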
Affiliation(s)
- Kristjan Kalm
- Cognition and Brain Sciences Unit, Medical Research Council, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom
- Dennis Norris
- Cognition and Brain Sciences Unit, Medical Research Council, 15 Chaucer Road, Cambridge, CB2 7EF, United Kingdom
22
Nozari N, Mirman D, Thompson-Schill SL. The ventrolateral prefrontal cortex facilitates processing of sentential context to locate referents. BRAIN AND LANGUAGE 2016; 157-158:1-13. [PMID: 27148817 PMCID: PMC4974818 DOI: 10.1016/j.bandl.2016.04.006] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2015] [Revised: 04/03/2016] [Accepted: 04/10/2016] [Indexed: 05/24/2023]
Abstract
Left ventrolateral prefrontal cortex (VLPFC) has been implicated in both integration and conflict resolution in sentence comprehension. Most evidence in favor of the integration account comes from processing ambiguous or anomalous sentences, which also poses a demand for conflict resolution. In two eye-tracking experiments we studied the role of VLPFC in integration when demands for conflict resolution were minimal. Two closely-matched groups of individuals with chronic post-stroke aphasia were tested: the Anterior group had damage to left VLPFC, whereas the Posterior group had left temporo-parietal damage. In Experiment 1 a semantic cue (e.g., "She will eat the apple") uniquely marked the target (apple) among three distractors that were incompatible with the verb. In Experiment 2 phonological cues (e.g., "She will see an eagle."/"She will see a bear.") uniquely marked the target among three distractors whose onsets were incompatible with the cue (e.g., all consonants when the target started with a vowel). In both experiments, control conditions had a similar format, but contained no semantic or phonological contextual information useful for target integration (e.g., the verb "see", and the determiner "the"). All individuals in the Anterior group were slower in using both types of contextual information to locate the target than were individuals in the Posterior group. These results suggest a role for VLPFC in integration beyond conflict resolution. We discuss a framework that accommodates both integration and conflict resolution.
Affiliation(s)
- Nazbanou Nozari
- Department of Neurology, Johns Hopkins University School of Medicine, United States; Department of Cognitive Science, Johns Hopkins University, United States.
- Daniel Mirman
- Department of Psychology, Drexel University, United States; Moss Rehabilitation Research Institute, United States
23
Long MA, Katlowitz KA, Svirsky MA, Clary RC, Byun TM, Majaj N, Oya H, Howard MA, Greenlee JDW. Functional Segregation of Cortical Regions Underlying Speech Timing and Articulation. Neuron 2016; 89:1187-1193. [PMID: 26924439 PMCID: PMC4833207 DOI: 10.1016/j.neuron.2016.01.032] [Citation(s) in RCA: 96] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Revised: 12/20/2015] [Accepted: 01/08/2016] [Indexed: 02/03/2023]
Abstract
Spoken language is a central part of our everyday lives, but the precise roles that individual cortical regions play in the production of speech are often poorly understood. To address this issue, we focally lowered the temperature of distinct cortical regions in awake neurosurgical patients, and we relate this perturbation to changes in produced speech sequences. Using this method, we confirm that speech is highly lateralized, with the vast majority of behavioral effects seen on the left hemisphere. We then use this approach to demonstrate a clear functional dissociation between nearby cortical speech sites. Focal cooling of pars triangularis/pars opercularis (Broca's region) and the ventral portion of the precentral gyrus (speech motor cortex) resulted in the manipulation of speech timing and articulation, respectively. Our results support a class of models that have proposed distinct processing centers underlying motor sequencing and execution for speech.
Affiliation(s)
- Michael A Long
- NYU Neuroscience Institute, Department of Otolaryngology, NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016 USA; Center for Neural Science, New York University, New York, NY 10003 USA.
- Kalman A Katlowitz
- NYU Neuroscience Institute, Department of Otolaryngology, NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016 USA; Center for Neural Science, New York University, New York, NY 10003 USA
- Mario A Svirsky
- NYU Neuroscience Institute, Department of Otolaryngology, NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016 USA; Center for Neural Science, New York University, New York, NY 10003 USA
- Rachel C Clary
- NYU Neuroscience Institute, Department of Otolaryngology, NYU Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016 USA; Center for Neural Science, New York University, New York, NY 10003 USA
- Tara McAllister Byun
- Department of Communicative Sciences and Disorders, New York University, New York, NY 10012 USA
- Najib Majaj
- Center for Neural Science, New York University, New York, NY 10003 USA
- Hiroyuki Oya
- Department of Neurosurgery, Human Brain Research Lab, University of Iowa, Iowa City, IA 52242 USA
- Matthew A Howard
- Department of Neurosurgery, Human Brain Research Lab, University of Iowa, Iowa City, IA 52242 USA
- Jeremy D W Greenlee
- Department of Neurosurgery, Human Brain Research Lab, University of Iowa, Iowa City, IA 52242 USA
24
Thothathiri M, Rattinger M. Controlled processing during sequencing. Front Hum Neurosci 2015; 9:599. [PMID: 26578941 PMCID: PMC4624862 DOI: 10.3389/fnhum.2015.00599] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2015] [Accepted: 10/15/2015] [Indexed: 11/25/2022] Open
Abstract
Longstanding evidence has identified a role for the frontal cortex in sequencing within both linguistic and non-linguistic domains. More recently, neuropsychological studies have suggested a specific role for the left premotor-prefrontal junction (BA 44/6) in selection between competing alternatives during sequencing. In this study, we used neuroimaging with healthy adults to confirm and extend knowledge about the neural correlates of sequencing. Participants reproduced visually presented sequences of syllables and words using manual button presses. Items in the sequence were presented either consecutively or concurrently. Concurrent presentation is known to trigger the planning of multiple responses, which might compete with one another. Therefore, we hypothesized that regions involved in controlled processing would show greater recruitment during the concurrent than the consecutive condition. Whole-brain analysis showed concurrent > consecutive activation in sensory, motor and somatosensory cortices and notably also in rostral-dorsal anterior cingulate cortex. Region of interest analyses showed increased activation within left BA 44/6 and correlation between this region’s activation and behavioral response times. Functional connectivity analysis revealed increased connectivity between left BA 44/6 and the posterior lobe of the cerebellum during the concurrent than the consecutive condition. These results corroborate recent evidence and demonstrate the involvement of BA 44/6 and other control regions when ordering co-activated representations.
Affiliation(s)
- Malathi Thothathiri
- Department of Speech and Hearing Science, The George Washington University, Washington DC, USA
- Michelle Rattinger
- Department of Speech and Hearing Science, The George Washington University, Washington DC, USA
25
Abstract
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
26
Poldrack RA. Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed? PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2015; 5:753-61. [PMID: 25076977 DOI: 10.1177/1745691610388777] [Citation(s) in RCA: 95] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The goal of cognitive neuroscience is to identify the mapping between brain function and mental processing. In this article, I examine the strategies that have been used to identify such mappings and argue that they may be fundamentally unable to identify selective structure-function mappings. To understand the functional anatomy of mental processes, it will be necessary for researchers to move from the brain-mapping strategies that the field has employed toward a search for selective associations. This will require a greater focus on the structure of cognitive processes, which can be achieved through the development of formal ontologies that describe the structure of mental processes. In this article, I outline the Cognitive Atlas Project, which is developing such ontologies, and show how this knowledge could be used in conjunction with data-mining approaches to more directly relate mental processes and brain function.
27
28
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/03/2015] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS processes also speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, by offspring emitting a low-level distress call for inquiring about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
29
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/21/2017] [Indexed: 12/28/2022] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
30
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004.2 DOI: 10.12688/f1000research.6175.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/12/2016] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
31
Hemispheric asymmetry in the formation of musical pitch expectations: a monaural listening and probe tone study. Neuropsychologia 2014; 65:37-40. [PMID: 25447063 DOI: 10.1016/j.neuropsychologia.2014.10.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2014] [Revised: 09/14/2014] [Accepted: 10/02/2014] [Indexed: 11/20/2022]
Abstract
This study investigated hemispheric asymmetry in the formation of musical pitch expectations by combining the monaural listening and probe tone paradigms. On each trial, adult participants heard a short context melody and a single pitch (i.e. a probe tone). Both the context and the probe tone were played in the left or right ear. The context was an ascending major scale or pitches from the major scale in a random order. Following each context, participants rated one of three probe tones for how well it fit with the context they just heard. Probe tones were one of two pitches from the major scale (the tonic or the supertonic) or an out-of-set pitch. Participants provided the highest ratings for the tonic, followed by the supertonic, followed by the out-of-set pitch. Ratings did not differ for the tonic or out-of-set pitch between the two ears, but participants provided lower ratings for the supertonic in the right ear. For the ascending context only, the difference in ratings between the tonic and supertonic was greater in the right ear. These results suggest that the left hemisphere differentiates the stability of pitches in a set by forming temporal expectations for specific, in-set pitches.
32
Marian V, Chabal S, Bartolotti J, Bradley K, Hernandez AE. Differential recruitment of executive control regions during phonological competition in monolinguals and bilinguals. BRAIN AND LANGUAGE 2014; 139:108-17. [PMID: 25463821 PMCID: PMC4363210 DOI: 10.1016/j.bandl.2014.10.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2014] [Revised: 09/03/2014] [Accepted: 10/13/2014] [Indexed: 05/30/2023]
Abstract
Behavioral research suggests that monolinguals and bilinguals differ in how they manage within-language phonological competition when listening to language. The current study explored whether bilingual experience might also change the neural resources recruited to control spoken-word competition. Seventeen Spanish-English bilinguals and eighteen English monolinguals completed an fMRI task in which they searched for a picture representing an aurally presented word (e.g., "candy") from an array of four presented images. On competitor trials, one of the objects in the display shared initial phonological overlap with the target (e.g., candle). While both groups experienced competition and responded more slowly on competitor trials than on unrelated trials, fMRI data suggest that monolinguals, but not bilinguals, activated executive control regions (e.g., anterior cingulate, superior frontal gyrus) during within-language phonological competition. We conclude that differences in how monolinguals and bilinguals manage competition may result from bilinguals' more efficient deployment of neural resources.
Affiliation(s)
- Sarah Chabal
- Northwestern University, Evanston, IL, United States
33
Ruck L. Manual praxis in stone tool manufacture: implications for language evolution. BRAIN AND LANGUAGE 2014; 139:68-83. [PMID: 25463818 DOI: 10.1016/j.bandl.2014.10.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2014] [Revised: 09/27/2014] [Accepted: 10/13/2014] [Indexed: 06/04/2023]
Abstract
Alternative functions of the left-hemisphere dominant Broca's region have induced hypotheses regarding the evolutionary parallels between manual praxis and language in humans. Many recent studies on Broca's area reveal several assumptions about the cognitive mechanisms that underlie both functions, including: (1) an accurate, finely controlled body schema, (2) increasing syntactical abilities, particularly for goal-oriented actions, and (3) bilaterality and fronto-parietal connectivity. Although these characteristics are supported by experimental paradigms, many researchers have failed to acknowledge a major line of evidence for the evolutionary development of these traits: stone tools. The neuroscience of stone tool manufacture is a viable proxy for understanding evolutionary aspects of manual praxis and language, and may provide key information for evaluating competing hypotheses on the co-evolution of these cognitive domains in our species.
Affiliation(s)
- Lana Ruck
- Department of Anthropology, Florida Atlantic University, 777 Glades Rd., Boca Raton, FL, USA.
34
Peñaloza C, Benetello A, Tuomiranta L, Heikius IM, Järvinen S, Majos MC, Cardona P, Juncadella M, Laine M, Martin N, Rodríguez-Fornells A. Speech segmentation in aphasia. APHASIOLOGY 2014; 29:724-743. [PMID: 28824218 PMCID: PMC5560767 DOI: 10.1080/02687038.2014.982500] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
BACKGROUND Speech segmentation is one of the initial and mandatory phases of language learning. Although some people with aphasia have shown a preserved ability to learn novel words, their speech segmentation abilities have not been explored. AIMS We examined the ability of individuals with chronic aphasia to segment words from running speech via statistical learning. We also explored the relationships between speech segmentation and aphasia severity, and short-term memory capacity. We further examined the role of lesion location in speech segmentation and short-term memory performance. METHODS & PROCEDURES The experimental task was first validated with a group of young adults (n = 120). Participants with chronic aphasia (n = 14) were exposed to an artificial language and were evaluated in their ability to segment words using a speech segmentation test. Their performance was contrasted against chance level and compared to that of a group of elderly matched controls (n = 14) using group and case-by-case analyses. OUTCOMES & RESULTS As a group, participants with aphasia were significantly above chance level in their ability to segment words from the novel language and did not significantly differ from the group of elderly controls. Speech segmentation ability in the aphasic participants was not associated with aphasia severity although it significantly correlated with word pointing span, a measure of verbal short-term memory. Case-by-case analyses identified four individuals with aphasia who performed above chance level on the speech segmentation task, all with predominantly posterior lesions and mild fluent aphasia. Their short-term memory capacity was also better preserved than in the rest of the group. CONCLUSIONS Our findings indicate that speech segmentation via statistical learning can remain functional in people with chronic aphasia and suggest that this initial language learning mechanism is associated with the functionality of the verbal short-term memory system and the integrity of the left inferior frontal region.
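For readers unfamiliar with segmentation by statistical learning, a minimal sketch (with a made-up three-word artificial language, not the stimuli used in this study): forward transitional probabilities are high within words and drop at word boundaries, which is the cue such tasks exploit.

```python
import random
from collections import Counter

random.seed(0)

# Made-up artificial language: three trisyllabic "words" concatenated in random order.
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
stream = [syllable for _ in range(300) for syllable in random.choice(words)]

# Forward transitional probability P(next | current) from syllable bigram counts.
bigrams = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in bigrams.items()}

print(tp.get(("tu", "pi"), 0.0))  # within-word transition: probability 1.0
print(tp.get(("ro", "go"), 0.0))  # across a word boundary: roughly 1/3
```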
Affiliation(s)
- Claudia Peñaloza
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute – IDIBELL, Barcelona, Spain
- Annalisa Benetello
- Department of Communication Sciences and Disorders, Eleanor M. Saffran Center for Cognitive Neuroscience, Temple University, Philadelphia, PA, USA
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Leena Tuomiranta
- Department of Psychology and Logopedics, Abo Akademi University, Turku, Finland
- Ida-Maria Heikius
- Department of Psychology and Logopedics, Abo Akademi University, Turku, Finland
- Sonja Järvinen
- Department of Psychology and Logopedics, Abo Akademi University, Turku, Finland
- Maria Carmen Majos
- Hospital Universitari de Bellvitge (HUB), Rehabilitation Section, Campus Bellvitge, University of Barcelona, Barcelona, Spain
- Pedro Cardona
- Hospital Universitari de Bellvitge (HUB), Neurology Section, Campus Bellvitge, University of Barcelona, Barcelona, Spain
- Montserrat Juncadella
- Hospital Universitari de Bellvitge (HUB), Neurology Section, Campus Bellvitge, University of Barcelona, Barcelona, Spain
- Matti Laine
- Department of Psychology and Logopedics, Abo Akademi University, Turku, Finland
- Nadine Martin
- Department of Communication Sciences and Disorders, Eleanor M. Saffran Center for Cognitive Neuroscience, Temple University, Philadelphia, PA, USA
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute – IDIBELL, Barcelona, Spain
- Department of Basic Psychology, Campus Bellvitge, University of Barcelona, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona, Spain
35
tDCS to temporoparietal cortex during familiarisation enhances the subsequent phonological coherence of nonwords in immediate serial recall. Cortex 2014; 63:132-44. [PMID: 25282052 DOI: 10.1016/j.cortex.2014.08.018] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2014] [Revised: 06/09/2014] [Accepted: 08/27/2014] [Indexed: 11/23/2022]
Abstract
Research has shown that transcranial direct current stimulation (tDCS) over left temporoparietal cortex - a region implicated in phonological processing - aids new word learning. The locus of this effect remains unclear since (i) experiments have not empirically separated the acquisition of phonological forms from lexical-semantic links and (ii) outcome measures have focused on learnt associations with a referent rather than phonological stability. We tested the hypothesis that left temporoparietal tDCS would strengthen the acquisition of phonological forms, even in the absence of the opportunity to acquire lexical-semantic associations. Participants were familiarised with nonwords paired with (i) photographs of concrete referents or (ii) blurred images where no clear features were visible. Nonword familiarisation proceeded under conditions of anodal tDCS and sham stimulation in different sessions. We examined the impact of these manipulations on the stability of the phonological trace in an immediate serial recall (ISR) task the following day, ensuring that any effects were due to the influence of tDCS on long-term learning and not a direct consequence of short-term changes in neural excitability. We found that only a few exposures to the phonological forms of nonwords were sufficient to enhance nonword ISR overall compared to entirely novel items. Anodal tDCS during familiarisation further enhanced the acquisition of phonological forms, producing a specific reduction in the frequency of phoneme migrations when sequences of nonwords were maintained in verbal short-term memory. More of the phonemes that were recalled were bound together as a whole correct nonword following tDCS. These data show that tDCS to left temporoparietal cortex can facilitate word learning by strengthening the acquisition of long-term phonological forms, irrespective of the availability of a concrete referent, and that the consequences of this learning can be seen beyond the learning task as strengthened phonological coherence in verbal short-term memory.
36
Kort NS, Nagarajan SS, Houde JF. A bilateral cortical network responds to pitch perturbations in speech feedback. Neuroimage 2013; 86:525-35. [PMID: 24076223 DOI: 10.1016/j.neuroimage.2013.09.042] [Citation(s) in RCA: 59] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2013] [Revised: 09/05/2013] [Accepted: 09/15/2013] [Indexed: 10/26/2022] Open
Abstract
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors.
Affiliation(s)
- Naomi S Kort
- Department of Radiology, University of California, San Francisco, and University of California, Berkeley USA; Joint Graduate Group in Bioengineering, University of California, San Francisco, USA.
- Srikantan S Nagarajan
- Department of Radiology, University of California, San Francisco, and University of California, Berkeley USA.
- John F Houde
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, USA.
37
Chobert J, Besson M. Musical expertise and second language learning. Brain Sci 2013; 3:923-40. [PMID: 24961431 PMCID: PMC4061852 DOI: 10.3390/brainsci3020923] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2013] [Revised: 05/23/2013] [Accepted: 05/24/2013] [Indexed: 11/16/2022] Open
Abstract
Increasing evidence suggests that musical expertise influences brain organization and brain functions. Moreover, results at the behavioral and neurophysiological levels reveal that musical expertise positively influences several aspects of speech processing, from auditory perception to speech production. In this review, we focus on the main results of the literature that led to the idea that musical expertise may benefit second language acquisition. We discuss several interpretations that may account for the influence of musical expertise on speech processing in native and foreign languages, and we propose new directions for future research.
Affiliation(s)
- Julie Chobert
- Laboratoire de Neurosciences Cognitives, CNRS-Aix-Marseille Université, 3 place Victor Hugo, 13331 Marseille Cedex 3, France.
- Mireille Besson
- Laboratoire de Neurosciences Cognitives, CNRS-Aix-Marseille Université, 3 place Victor Hugo, 13331 Marseille Cedex 3, France.
38
Syntax in a pianist's hand: ERP signatures of “embodied” syntax processing in music. Cortex 2013; 49:1325-39. [DOI: 10.1016/j.cortex.2012.06.007] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2011] [Revised: 04/02/2012] [Accepted: 06/13/2012] [Indexed: 11/19/2022]
39
Schapiro AC, Rogers TT, Cordova NI, Turk-Browne NB, Botvinick MM. Neural representations of events arise from temporal community structure. Nat Neurosci 2013; 16:486-92. [PMID: 23416451 PMCID: PMC3749823 DOI: 10.1038/nn.3331] [Citation(s) in RCA: 225] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2012] [Accepted: 01/08/2013] [Indexed: 11/09/2022]
Abstract
Our experience of the world seems to divide naturally into discrete, temporally extended events, yet the mechanisms underlying the learning and identification of events are poorly understood. Research on event perception has focused on transient elevations in predictive uncertainty or surprise as the primary signal driving event segmentation. We present human behavioral and functional magnetic resonance imaging (fMRI) evidence in favor of a different account, in which event representations coalesce around clusters or 'communities' of mutually predicting stimuli. Through parsing behavior, fMRI adaptation and multivoxel pattern analysis, we demonstrate the emergence of event representations in a domain containing such community structure, but in which transition probabilities (the basis of uncertainty and surprise) are uniform. We present a computational account of how the relevant representations might arise, proposing a direct connection between event learning and the learning of semantic categories.
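A hedged sketch of the key stimulus property (the graph below follows the general recipe of three five-node clusters with uniform degree, but its exact wiring is an assumption rather than a copy of the study's graph): every node has four neighbours, so a random walk has uniform transition probabilities, yet the resulting sequence still dwells within one "community" for long stretches.

```python
import random
from itertools import combinations

random.seed(1)

# Three clusters of five nodes; within a cluster every pair is connected except
# the two "boundary" nodes, and each boundary node links to the next cluster,
# giving every node degree 4 and every transition probability 1/4.
adj = {n: set() for n in range(15)}

def connect(a, b):
    adj[a].add(b)
    adj[b].add(a)

for c in range(3):
    cluster = list(range(5 * c, 5 * c + 5))
    boundary = cluster[3:]
    for a, b in combinations(cluster, 2):
        if not (a in boundary and b in boundary):
            connect(a, b)
    connect(boundary[1], 5 * ((c + 1) % 3) + 3)  # bridge to the next cluster

assert all(len(neighbours) == 4 for neighbours in adj.values())

# Uniform random walk: note the long runs within a single community.
node, communities_visited = 0, []
for _ in range(30):
    node = random.choice(sorted(adj[node]))
    communities_visited.append(node // 5)
print(communities_visited)
```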
Affiliation(s)
- Anna C Schapiro
- Department of Psychology, Princeton University, Princeton, New Jersey, USA.
40
Herman AB, Houde JF, Vinogradov S, Nagarajan SS. Parsing the phonological loop: activation timing in the dorsal speech stream determines accuracy in speech reproduction. J Neurosci 2013; 33:5439-53. [PMID: 23536060 PMCID: PMC3711632 DOI: 10.1523/jneurosci.1472-12.2013] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2012] [Revised: 12/07/2012] [Accepted: 12/18/2012] [Indexed: 11/21/2022] Open
Abstract
Despite significant research and important clinical correlates, direct neural evidence for a phonological loop linking speech perception, short-term memory and production remains elusive. To investigate these processes, we acquired whole-head magnetoencephalographic (MEG) recordings from human subjects performing a variable-length syllable sequence reproduction task. The MEG sensor data were source localized using a time-frequency optimized spatially adaptive filter, and we examined the time courses of cortical oscillatory power and the correlations of oscillatory power with behavior between onset of the audio stimulus and the overt speech response. We found dissociations between time courses of behaviorally relevant activations in a network of regions falling primarily within the dorsal speech stream. In particular, verbal working memory load modulated high gamma power in both Sylvian-parietal-temporal and Broca's areas. The time courses of the correlations between high gamma power and subject performance clearly alternated between these two regions throughout the task. Our results provide the first evidence of a reverberating input-output buffer system in the dorsal stream underlying speech sensorimotor integration, consistent with recent phonological loop, competitive queuing, and speech-motor control models. These findings also shed new light on potential sources of speech dysfunction in aphasia and neuropsychiatric disorders, identifying anatomically and behaviorally dissociable activation time windows critical for successful speech reproduction.
Affiliation(s)
- Alexander B. Herman
- Biomagnetic Imaging Laboratory, Department of Radiology and Biomedical Imaging, and
- John F. Houde
- Departments of Otolaryngology–Head and Neck Surgery and
- Sophia Vinogradov
- Psychiatry, University of California, San Francisco, San Francisco, California 94143
41
van Ermingen-Marbach M, Grande M, Pape-Neumann J, Sass K, Heim S. Distinct neural signatures of cognitive subtypes of dyslexia with and without phonological deficits. NEUROIMAGE-CLINICAL 2013; 2:477-90. [PMID: 24936406 PMCID: PMC4054964 DOI: 10.1016/j.nicl.2013.03.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2012] [Revised: 03/08/2013] [Accepted: 03/16/2013] [Indexed: 01/23/2023]
Abstract
Developmental dyslexia can be distinguished as different cognitive subtypes with and without phonological deficits. However, despite some general agreement on the neurobiological basis of dyslexia, the neurofunctional mechanisms underlying these cognitive subtypes remain to be identified. The present BOLD fMRI study thus aimed to identify the distinct and/or shared neural activation patterns that characterize dyslexia subtypes. German dyslexic fourth graders with and without deficits in phonological awareness and age-matched normal readers performed a phonological decision task: does the auditory word contain the phoneme /a/? Both dyslexic subtypes showed increased activation in the right cerebellum (Lobule IV) compared to controls. Subtype-specific increased activation was systematically found for the phonological dyslexics as compared to those without this deficit and controls in the left inferior frontal gyrus (area 44: phonological segmentation), the left SMA (area 6), the left precentral gyrus (area 6) and the right insula. Non-phonological dyslexics revealed subtype-specific increased activation in the left supramarginal gyrus (area PFcm; phonological storage) and angular gyrus (area PGp). The study thus provides the first direct evidence for the neurobiological grounding of dyslexia subtypes. Moreover, the data contribute to a better understanding of the frequently encountered heterogeneous neuroimaging results in the field of dyslexia.
Affiliation(s)
- Muna van Ermingen-Marbach
- Section Structural-Functional Brain Mapping, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Germany ; JARA-Translational Brain Medicine, Germany
- Marion Grande
- Section Neurological Cognition Research, Department of Neurology, Medical School, RWTH Aachen University, Germany
- Julia Pape-Neumann
- Section Neurological Cognition Research, Department of Neurology, Medical School, RWTH Aachen University, Germany
- Katharina Sass
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Germany ; JARA-Translational Brain Medicine, Germany
- Stefan Heim
- Section Structural-Functional Brain Mapping, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Germany ; Section Neurological Cognition Research, Department of Neurology, Medical School, RWTH Aachen University, Germany ; Research Centre Jülich, Institute of Neuroscience and Medicine (INM-1), Germany ; JARA-Translational Brain Medicine, Germany
42
Archila-Suerte P, Zevin J, Ramos AI, Hernandez AE. The neural basis of non-native speech perception in bilingual children. Neuroimage 2013; 67:51-63. [PMID: 23123633 PMCID: PMC5942220 DOI: 10.1016/j.neuroimage.2012.10.023] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2012] [Revised: 09/17/2012] [Accepted: 10/15/2012] [Indexed: 10/27/2022] Open
Abstract
The goal of the present study is to reveal how the neural mechanisms underlying non-native speech perception change throughout childhood. In a pre-attentive listening fMRI task, English monolingual and Spanish-English bilingual children - divided into groups of younger (6-8yrs) and older children (9-10yrs) - were asked to watch a silent movie while several English syllable combinations played through a pair of headphones. Two additional groups of monolingual and bilingual adults were included in the analyses. Our results show that the neural mechanisms supporting speech perception throughout development differ in monolinguals and bilinguals. While monolinguals recruit perceptual areas (i.e., superior temporal gyrus) in early and late childhood to process native speech, bilinguals recruit perceptual areas (i.e., superior temporal gyrus) in early childhood and higher-order executive areas in late childhood (i.e., bilateral middle frontal gyrus and bilateral inferior parietal lobule, among others) to process non-native speech. The findings support the Perceptual Assimilation Model and the Speech Learning Model and suggest that the neural system processes phonological information differently depending on the stage of L2 speech learning.
43
Rauschecker JP. Processing Streams in Auditory Cortex. NEURAL CORRELATES OF AUDITORY COGNITION 2013. [DOI: 10.1007/978-1-4614-2350-8_2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
44
Chobert J, François C, Velay JL, Besson M. Twelve Months of Active Musical Training in 8- to 10-Year-Old Children Enhances the Preattentive Processing of Syllabic Duration and Voice Onset Time. Cereb Cortex 2012; 24:956-67. [PMID: 23236208 DOI: 10.1093/cercor/bhs377] [Citation(s) in RCA: 140] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Affiliation(s)
- Julie Chobert
- Laboratoire de Neurosciences Cognitives, CNRS - Aix-Marseille Université, Marseille Cedex 3, France
45
Thothathiri M, Gagliardi M, Schwartz MF. Subdivision of frontal cortex mechanisms for language production in aphasia. Neuropsychologia 2012; 50:3284-94. [PMID: 23022077 DOI: 10.1016/j.neuropsychologia.2012.09.021] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2011] [Revised: 08/20/2012] [Accepted: 09/11/2012] [Indexed: 11/26/2022]
Abstract
Ventrolateral prefrontal cortex (VLPFC) has long been linked to language production, but the precise mechanisms are still being elucidated. Using neuropsychological case studies, we explored possible sub-specialization within this region for different linguistic and executive functions. Frontal patients with different lesion profiles completed two sequencing tasks, which were hypothesized to engage partially overlapping components. The multi-word priming task tested the sequencing of co-activated representations and the overriding of primed word orders. The sequence reproduction task tested the sequencing of co-activated representations, but did not employ a priming manipulation. We compared patients' performance on the two tasks to that of healthy, age-matched controls. Results are partially consistent with an anterior-posterior gradient of cognitive control within lateral prefrontal cortex (Koechlin & Summerfield, 2007). However, we also found a stimulus-specific pattern, which suggests that sub-specialization might be contingent on type of representation as well as type of control signal. Isolating such components functionally and anatomically might lead to a better understanding of language production deficits in aphasia.
46
Sammler D, Koelsch S, Ball T, Brandt A, Grigutsch M, Huppertz HJ, Knösche TR, Wellmer J, Widman G, Elger CE, Friederici AD, Schulze-Bonhage A. Co-localizing linguistic and musical syntax with intracranial EEG. Neuroimage 2012; 64:134-46. [PMID: 23000255 DOI: 10.1016/j.neuroimage.2012.09.035] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2012] [Revised: 09/05/2012] [Accepted: 09/13/2012] [Indexed: 10/27/2022] Open
Abstract
Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has been so far neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While alluding to the dissimilarity in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support for a co-localization of early musical and linguistic syntax processing in the temporal lobe.
Affiliation(s)
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany.
47
Tillmann B. Music and language perception: expectations, structural integration, and cognitive sequencing. Top Cogn Sci 2012; 4:568-84. [PMID: 22760955 DOI: 10.1111/j.1756-8765.2012.01209.x] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes.
Affiliation(s)
- Barbara Tillmann
- Lyon Neuroscience Research Center - CRNL, CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon Cedex.
48
Integration of faces and vocalizations in ventral prefrontal cortex: implications for the evolution of audiovisual speech. Proc Natl Acad Sci U S A 2012; 109 Suppl 1:10717-24. [PMID: 22723356 DOI: 10.1073/pnas.1204335109] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The integration of facial gestures and vocal signals is an essential process in human communication and relies on an interconnected circuit of brain regions, including language regions in the inferior frontal gyrus (IFG). Studies have determined that ventral prefrontal cortical regions in macaques [e.g., the ventrolateral prefrontal cortex (VLPFC)] share similar cytoarchitectonic features as cortical areas in the human IFG, suggesting structural homology. Anterograde and retrograde tracing studies show that macaque VLPFC receives afferents from the superior and inferior temporal gyrus, which provide complex auditory and visual information, respectively. Moreover, physiological studies have shown that single neurons in VLPFC integrate species-specific face and vocal stimuli. Although bimodal responses may be found across a wide region of prefrontal cortex, vocalization responsive cells, which also respond to faces, are mainly found in anterior VLPFC. This suggests that VLPFC may be specialized to process and integrate social communication information, just as the IFG is specialized to process and integrate speech and gestures in the human brain.
49
Abrams DA, Ryali S, Chen T, Balaban E, Levitin DJ, Menon V. Multivariate activation and connectivity patterns discriminate speech intelligibility in Wernicke's, Broca's, and Geschwind's areas. Cereb Cortex 2012; 23:1703-14. [PMID: 22693339 DOI: 10.1093/cercor/bhs165] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The brain network underlying speech comprehension is usually described as encompassing fronto-temporal-parietal regions, while neuroimaging studies of speech intelligibility have focused on a more spatially restricted network dominated by the superior temporal cortex. Here we use functional magnetic resonance imaging with a novel whole-brain multivariate pattern analysis (MVPA) to more fully characterize neural responses and connectivity to intelligible speech. Consistent with previous univariate findings, intelligible speech elicited greater activity in bilateral superior temporal cortex relative to unintelligible speech. However, MVPA identified a more extensive network that discriminated between intelligible and unintelligible speech, including left-hemisphere middle temporal gyrus, angular gyrus, inferior temporal cortex, and inferior frontal gyrus pars triangularis. These fronto-temporal-parietal areas also showed greater functional connectivity during intelligible, compared with unintelligible, speech. Our results suggest that speech intelligibility is encoded by distinct fine-grained spatial representations and within-task connectivity, rather than differential engagement or disengagement of brain regions, and they provide a more complete view of the brain network serving speech comprehension. Our findings bridge a divide between neural models of speech comprehension and the neuroimaging literature on speech intelligibility, and suggest that speech intelligibility relies on differential multivariate response and connectivity patterns in Wernicke's, Broca's, and Geschwind's areas.
Affiliation(s)
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
50
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224 PMCID: PMC3398395 DOI: 10.1016/j.neuroimage.2012.04.062] [Citation(s) in RCA: 1298] [Impact Index Per Article: 108.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2011] [Revised: 04/25/2012] [Accepted: 04/30/2012] [Indexed: 01/17/2023] Open
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again leading to some consistent and undisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.