1. Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022; 176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392.
2. Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. PMID: 35569784. DOI: 10.1016/j.neuroimage.2022.119310.
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. Here we contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read) whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at the proportion of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem processing towards the left hemisphere and a bias of song processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground: they provide evidence for specific processing circuits for speech and music in the left and right hemispheres, but also for shared processing of the melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
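The melodic measure used above, the proportion of significant autocorrelations (PSA), can be illustrated with a short sketch. This is a minimal illustration, not the authors' code: it assumes a fixed-rate F0 contour, a 50-lag range, and the standard ±1.96/√N white-noise significance bound, all of which are illustrative choices.

```python
import numpy as np

def proportion_significant_autocorrelations(pitch, max_lag=50):
    """Fraction of autocorrelation lags exceeding the 95% white-noise bound.

    `pitch` is a 1-D array of F0 values sampled at a fixed rate. The
    +/-1.96/sqrt(N) bound and the lag range are illustrative assumptions.
    """
    x = np.asarray(pitch, dtype=float)
    x = x - x.mean()                       # remove the mean pitch level
    denom = float(x @ x)
    bound = 1.96 / np.sqrt(len(x))         # 95% bound under the white-noise null
    r = [float(x[:-k] @ x[k:]) / denom for k in range(1, max_lag + 1)]
    return float(np.mean(np.abs(r) > bound))

# A periodic ("melodic") contour scores high; white noise scores near the
# 5% false-alarm rate of the significance test.
rng = np.random.default_rng(0)
melodic = np.sin(np.linspace(0, 20 * np.pi, 1000))
noise = rng.normal(size=1000)
print(proportion_significant_autocorrelations(melodic))
print(proportion_significant_autocorrelations(noise))
```

On this construction, a strongly self-similar pitch track yields a PSA near 1, which is the sense in which PSA indexes "melodicity" of a recording.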
Affiliation(s)
- Mathias Scharinger: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany
- Christine A Knoop: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
3. Niemczak CE, Lichtenstein JD, Magohe A, Amato JT, Fellows AM, Gui J, Huang M, Rieke CC, Massawe ER, Boivin MJ, Moshi N, Buckey JC. The Relationship Between Central Auditory Tests and Neurocognitive Domains in Adults Living With HIV. Front Neurosci 2021; 15:696513. PMID: 34658754. PMCID: PMC8517794. DOI: 10.3389/fnins.2021.696513.
Abstract
Objective: Tests requiring central auditory processing, such as speech perception-in-noise, are simple, time-efficient, and correlate with cognitive processing, and may therefore be useful for tracking brain function. Doing this effectively requires knowing which tests correlate with overall cognitive function and with specific cognitive domains. This study evaluated the relationship between selected central auditory tests and cognitive domains in a cohort of normal-hearing adults living with HIV and HIV-negative controls. The long-term aim is to determine the relationships between auditory processing and neurocognitive domains and to apply this to analyzing cognitive function longitudinally in HIV and other neurocognitive disorders.

Method: Subjects were recruited from an ongoing study in Dar es Salaam, Tanzania. Central auditory measures included the Gap Detection Test (Gap), Hearing in Noise Test (HINT), and Triple Digit Test (TDT). Cognitive measures included variables from the Test of Variables of Attention (TOVA), the Cogstate neurocognitive battery, and the Kiswahili Montreal Cognitive Assessment (MoCA). The measures represented three cognitive domains: processing speed, learning, and working memory. Bootstrap resampling was used to calculate the mean and standard deviation of the proportion of variance explained by each central auditory test for each cognitive measure. The association of cognitive measures with central auditory variables, taking HIV status and age into account, was determined using regression models.

Results: HINT and TDT were significantly associated with Cogstate learning and working-memory tests. Gap was not significantly associated with any cognitive measure once age was in the model. TDT explained the largest mean proportion of variance and had the strongest relationship to the MoCA and Cogstate tasks. With age in the model, HIV status did not affect the relationship between central auditory tests and cognitive measures. Age was strongly associated with multiple cognitive tests.

Conclusion: Central auditory tests were associated with measures of learning and working memory. Compared to the other central auditory tests, TDT was most strongly related to cognitive function. These findings expand on the association between auditory processing and cognitive domains seen in other studies and support evaluating these tests for tracking brain health in HIV and other neurocognitive disorders.
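The bootstrap procedure described in the Method section, resampling subjects to estimate the mean and standard deviation of the proportion of variance explained, can be sketched as follows. The regression setup, effect size, and sample size below are synthetic stand-ins invented for the example, not the study's data or exact model.

```python
import numpy as np

def variance_explained(y, X):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def bootstrap_variance_explained(y, X, n_boot=1000, seed=0):
    """Bootstrap mean and SD of the proportion of variance explained."""
    rng = np.random.default_rng(seed)
    n = len(y)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)   # resample subjects with replacement
        stats.append(variance_explained(y[idx], X[idx]))
    stats = np.asarray(stats)
    return stats.mean(), stats.std()

# Synthetic illustration: a "cognitive score" partly predicted by an
# "auditory test" score plus noise.
rng = np.random.default_rng(1)
auditory = rng.normal(size=200)
cognitive = 0.6 * auditory + rng.normal(size=200)
mean_r2, sd_r2 = bootstrap_variance_explained(cognitive, auditory[:, None])
print(mean_r2, sd_r2)
```

Covariates such as age or HIV status would enter as additional columns of `X`; comparing the R² increment with and without a given auditory test is one way to attribute variance to that test.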
Affiliation(s)
- Christopher E. Niemczak: Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Jonathan D. Lichtenstein: Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States; Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Albert Magohe: Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Jennifer T. Amato: Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States; Department of Psychiatry, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Abigail M. Fellows: Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Jiang Gui: Department of Data Science, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Michael Huang: Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Catherine C. Rieke: Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States; Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
- Enica R. Massawe: Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Michael J. Boivin: Department of Psychiatry, Michigan State University, East Lansing, MI, United States
- Ndeserua Moshi: Department of Otorhinolaryngology, Muhimbili University of Health and Allied Sciences, Dar es Salaam, Tanzania
- Jay C. Buckey: Space Medicine Innovations Laboratory, Geisel School of Medicine, Dartmouth College, Hanover, NH, United States; Dartmouth-Hitchcock Medical Center, Lebanon, NH, United States
4. Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. PMID: 33690177. PMCID: PMC8738965. DOI: 10.1088/1741-2552/abecf0.
Abstract
Objective. Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds).

Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials.

Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses of left-hemisphere (LH) and right-hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions (including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)) that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, and motor cortex) were necessary to describe later decision stages (300-800 ms) of categorization, and these areas were highly associated with the strength of listeners' categorical hearing (i.e., the slope of behavioral identification functions).

Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
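The core decoding step, training a linear SVM to separate two classes of multichannel responses and scoring its accuracy, can be illustrated on synthetic data. The batch hinge-loss trainer below is a stand-in sketch, not the paper's pipeline (which also used stability selection and source-level ERPs); the 64-"channel" Gaussian data and all parameter values are invented for the example.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.01, epochs=500):
    """Linear SVM via batch subgradient descent on the hinge loss.

    y must be in {-1, +1}. Minimizes lam/2*||w||^2 + mean(max(0, 1 - y*f(x))).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # margin-violating samples
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic two-class "evoked response" data: 64-channel patterns whose
# class means differ, loosely mimicking prototypical vs. ambiguous tokens.
rng = np.random.default_rng(42)
n_per, d = 100, 64
shift = np.zeros(d)
shift[:8] = 1.0                                  # class difference on 8 channels
X = np.vstack([rng.normal(size=(n_per, d)) + shift,
               rng.normal(size=(n_per, d)) - shift])
y = np.concatenate([np.ones(n_per), -np.ones(n_per)])
w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(acc)
```

In a time-resolved decoding analysis like the one summarized above, such a classifier would be trained and cross-validated at each latency, and the earliest above-chance window taken as the onset of category information.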
Affiliation(s)
- Md Sultan Mahmud: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States of America
6. Kuiper JJ, Lin YH, Young IM, Bai MY, Briggs RG, Tanglay O, Fonseka RD, Hormovas J, Dhanaraj V, Conner AK, O'Neal CM, Sughrue ME. A parcellation-based model of the auditory network. Hear Res 2020; 396:108078. PMID: 32961519. DOI: 10.1016/j.heares.2020.108078.
Abstract
INTRODUCTION: The auditory network plays an important role in interaction with the environment. Multiple cortical areas, such as the inferior frontal gyrus, superior temporal gyrus, and adjacent insula, have been implicated in this processing. However, understanding of this network's connectivity has lacked tractographic specificity.

METHODS: Using attention task-based functional magnetic resonance imaging (fMRI) studies, an activation likelihood estimation (ALE) of the auditory network was generated. Regions of interest corresponding to the cortical parcellation scheme previously published under the Human Connectome Project were co-registered onto the ALE in Montreal Neurological Institute coordinate space and visually assessed for inclusion in the network. Diffusion spectrum MRI-based fiber tractography was performed to determine the structural connections between the cortical parcellations comprising the network.

RESULTS: Fifteen cortical regions were found to be part of the auditory network: areas 44 and 8C; auditory areas 1, 4, and 5; frontal operculum area 4; the lateral belt, medial belt, and parabelt; parietal area F centromedian; the perisylvian language area; the retroinsular cortex; the supplementary and cingulate eye field; and temporoparietal junction area 1. These regions showed consistent interconnections between adjacent parcellations. The frontal aslant tract was found to connect areas within the frontal lobe, the arcuate fasciculus to connect the frontal and temporal lobes, and subcortical U-fibers to connect parcellations within the temporal area. Further studies may refine this model, with the ultimate goal of clinical application.
Affiliation(s)
- Joseph J Kuiper: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Yueh-Hsin Lin: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Michael Y Bai: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Robert G Briggs: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Onur Tanglay: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- R Dineth Fonseka: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Jorge Hormovas: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Vukshitha Dhanaraj: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Andrew K Conner: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Christen M O'Neal: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Michael E Sughrue: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
7. Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum Brain Mapp 2020; 41:1842-1858. PMID: 31957928. PMCID: PMC7268089. DOI: 10.1002/hbm.24916.
Abstract
Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right-hemispheric regions, beyond the classical left-hemispheric language system. Whether this notion generalises across languages remains, however, unclear. Tonal languages are a particularly interesting test case because of the dual linguistic function of pitch, which conveys lexical meaning in the form of tone in addition to intonation. To date, only a few studies have explored how intonation is processed in tonal languages, how this compares to tone, and how it differs between tonal and non-tonal language speakers. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised mono-syllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of the two groups' brain activity across the three tasks showed large cross-linguistic commonalities in the neural processing of intonation in left fronto-parietal, right frontal, and bilateral cingulo-opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision-making processes, respectively. Tone processing overlapped with intonation processing in left fronto-parietal areas in both groups, but evoked additional activity in bilateral temporo-parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross-linguistic commonalities in the neural implementation of intonation processing, but a dissociation for semantic processing of tone, present only in tonal language speakers.
Affiliation(s)
- Pei-Ju Chien: International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen: Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler: Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
8. Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks. Neuropsychologia 2019; 124:322-336. PMID: 30444980. DOI: 10.1016/j.neuropsychologia.2018.11.006.
Abstract
A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by either overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
9. Briggs RG, Pryor DP, Conner AK, Nix CE, Milton CK, Kuiper JK, Palejwala AH, Sughrue ME. The Artery of Aphasia, A Uniquely Sensitive Posterior Temporal Middle Cerebral Artery Branch that Supplies Language Areas in the Brain: Anatomy and Report of Four Cases. World Neurosurg 2019; 126:e65-e76. PMID: 30735868. DOI: 10.1016/j.wneu.2019.01.159.
Abstract
BACKGROUND: Arterial disruption during brain surgery can cause devastating injuries to wide expanses of white and gray matter beyond the tumor resection cavity. Such damage may occur as a result of disrupting blood flow through en passage arteries. Identification of these arteries is critical to prevent unforeseen neurologic sequelae during brain tumor resection. In this study, we discuss one such artery, termed the artery of aphasia (AoA), which when disrupted can lead to receptive and expressive language deficits.

METHODS: We performed a retrospective review of all patients undergoing an awake craniotomy for resection of a glioma by the senior author from 2012 to 2018. Patients were included if they experienced language deficits secondary to postoperative infarction in the left posterior temporal lobe in the distribution of the AoA. The gross anatomy of the AoA was then compared with activation likelihood estimations of the auditory and semantic language networks using coordinate-based meta-analytic techniques.

RESULTS: We identified 4 patients with left-sided posterior temporal artery infarctions in the distribution of the AoA on diffusion-weighted magnetic resonance imaging. All 4 patients developed substantial expressive and receptive language deficits after surgery. Functional language improvement occurred in only 2/4 patients. Activation likelihood estimations localized parts of the auditory and semantic language networks in the distribution of the AoA.

CONCLUSIONS: The AoA is prone to blood flow disruption despite benign manipulation. Patients seem to have limited capacity for speech recovery after intraoperative ischemia in the distribution of this artery, which supplies parts of the auditory and semantic language networks.
Affiliation(s)
- Robert G Briggs: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Dillon P Pryor: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Andrew K Conner: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Cameron E Nix: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Camille K Milton: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Joseph K Kuiper: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Ali H Palejwala: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, USA
- Michael E Sughrue: Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, Australia
10. Smith E, Junger J, Pauly K, Kellermann T, Neulen J, Neuschaefer-Rube C, Derntl B, Habel U. Gender incongruence and the brain - Behavioral and neural correlates of voice gender perception in transgender people. Horm Behav 2018; 105:11-21. PMID: 29981752. DOI: 10.1016/j.yhbeh.2018.07.001.
Abstract
The phenomenon of gender incongruence is hypothesized to arise from discrepant sexual development of the brain and the genitals, contingent on genetic and hormonal mechanisms. We aimed to visualize transgender identity on a neurobiological level, assuming a higher functional similarity to individuals of the aspired rather than the assigned sex. Implementing a gender perception paradigm featuring male and female voice stimuli, behavioral and functional imaging data of transmen were compared to those of men and women, and to transwomen, respectively. Men had decreased activation in response to voices of the other sex in regions across the frontoparietal and insular cortex, while the activation patterns of women and transmen showed little or no differentiation between male and female voices. Further, transmen had a comparatively high discrimination performance for ambiguous male voices, possibly reflecting a high sensitivity to voices of the aspired sex. Comparing transmen and transwomen yielded only a few differences in the processing of male compared to female voices. In the insula, we observed a pattern similar to that of men and women, the neural responses of the transgender group being in accordance with their gender identity rather than their assigned sex. Notwithstanding the similarities dependent on biological sex, the findings support the hypothesis that gender incongruence is a condition in which neural processing modes are partly incongruent with one's assigned sex.
Affiliation(s)
- Elke Smith: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jessica Junger: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-BRAIN Institute, Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
- Katharina Pauly: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-BRAIN Institute, Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
- Thilo Kellermann: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-BRAIN Institute, Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
- Joseph Neulen: Department of Gynaecological Endocrinology and Reproductive Medicine, Medical School, RWTH Aachen University, Aachen, Germany
- Christiane Neuschaefer-Rube: Department of Phoniatrics, Pedaudiology and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany
- Birgit Derntl: Department of Psychiatry and Psychotherapy, University Hospital Tübingen, Tübingen, Germany
- Ute Habel: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-BRAIN Institute, Brain Structure-Function Relationships, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
Collapse
11
Neural correlates of dystonic tremor: a multimodal study of voice tremor in spasmodic dysphonia. Brain Imaging Behav 2018; 11:166-175. [PMID: 26843004] [DOI: 10.1007/s11682-016-9513-x]
Abstract
Tremor affecting a dystonic body part is a frequent feature of adult-onset dystonia. However, our understanding of dystonic tremor pathophysiology remains limited, as its interplay with the main co-occurring disorder, dystonia, is largely unknown. We used a combination of functional MRI, voxel-based morphometry and diffusion-weighted imaging to investigate similar and distinct patterns of functional and structural brain alterations in patients with dystonic tremor of voice (DTv) and isolated spasmodic dysphonia (SD). We found that, compared to controls, SD patients with and without DTv showed similarly increased activation in the sensorimotor cortex, inferior frontal (IFG) and superior temporal gyri, putamen and ventral thalamus, as well as deficient activation in the inferior parietal cortex and middle frontal gyrus (MFG). Common structural alterations were observed in the IFG and putamen, which were further coupled with functional abnormalities in both patient groups. Abnormal activation in the left putamen was correlated with SD onset; SD/DTv onset was associated with right putaminal volumetric changes. DTv severity was significantly related to abnormal volume of the left IFG. Direct patient group comparisons showed that SD/DTv patients had additional abnormalities in MFG and cerebellar function, and in white matter integrity in the posterior limb of the internal capsule. Our findings suggest that dystonia and dystonic tremor, at least in the case of SD and SD/DTv, are heterogeneous disorders at different ends of the same pathophysiological spectrum, with each disorder carrying a characteristic neural signature; this may help the development of differential markers for the two conditions.
12
Clemens B, Junger J, Pauly K, Neulen J, Neuschaefer-Rube C, Frölich D, Mingoia G, Derntl B, Habel U. Male-to-female gender dysphoria: Gender-specific differences in resting-state networks. Brain Behav 2017; 7:e00691. [PMID: 28523232] [PMCID: PMC5434195] [DOI: 10.1002/brb3.691]
Abstract
INTRODUCTION: Recent research found gender-related differences in resting-state functional connectivity (rs-FC) measured by functional magnetic resonance imaging (fMRI). To the best of our knowledge, there are no studies examining the differences in rs-FC between men, women, and individuals who report a discrepancy between their anatomical sex and their gender identity, i.e. gender dysphoria (GD).
METHODS: To address this important issue, we present the first fMRI study systematically investigating the differences in typical resting-state networks (RSNs) and hormonal treatment effects in 26 male-to-female GD individuals (MtFs) compared with 19 men and 20 women.
RESULTS: Differences between male and female control groups were found only in the auditory RSN, whereas differences between both control groups and MtFs were found in the auditory and fronto-parietal RSNs, including both primary sensory areas (e.g. calcarine gyrus) and higher-order cognitive areas such as the middle and posterior cingulate and dorsomedial prefrontal cortex. Overall, differences in MtFs compared with men and women were more pronounced before cross-sex hormonal treatment. Interestingly, rs-FC between MtFs and women did not differ significantly after treatment. When comparing hormonally untreated and treated MtFs, we found differences in connectivity of the calcarine gyrus and thalamus in the context of the auditory network, as well as the inferior frontal gyrus in the context of the fronto-parietal network.
CONCLUSION: Our results provide first evidence that MtFs exhibit patterns of rs-FC which are different from both their assigned and their aspired gender, indicating an intermediate position between the two sexes. We suggest that the present study constitutes a starting point for future research designed to clarify whether the brains of individuals with GD are more similar to their assigned or their aspired gender.
Affiliation(s)
- Benjamin Clemens: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Jessica Junger: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Jülich, Germany
- Katharina Pauly: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Jülich, Germany
- Josef Neulen: Department of Gynecological Endocrinology and Reproductive Medicine, Medical School, RWTH Aachen University, Aachen, Germany
- Christiane Neuschaefer-Rube: Department of Phoniatrics, Pedaudiology and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany
- Dirk Frölich: Department of Phoniatrics, Pedaudiology and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany
- Gianluca Mingoia: JARA-Translational Brain Medicine, Jülich, Germany; Interdisciplinary Center for Clinical Research (IZKF), RWTH Aachen University, Aachen, Germany
- Birgit Derntl: Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Werner Reichardt Center for Integrative Neuroscience (CIN), University of Tübingen, Tübingen, Germany; LEAD Graduate School and Research Network, Tübingen, Germany
- Ute Habel: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Jülich, Germany
13
Measuring speaker-listener neural coupling with functional near infrared spectroscopy. Sci Rep 2017; 7:43293. [PMID: 28240295] [PMCID: PMC5327440] [DOI: 10.1038/srep43293]
Abstract
The present study investigates brain-to-brain coupling, defined as inter-subject correlations in the hemodynamic response, during natural verbal communication. We used functional near-infrared spectroscopy (fNIRS) to record brain activity of 3 speakers telling stories and 15 listeners comprehending audio recordings of these stories. Listeners’ brain activity was significantly correlated with speakers’ with a delay. This between-brain correlation disappeared when verbal communication failed. We further compared the fNIRS and functional Magnetic Resonance Imaging (fMRI) recordings of listeners comprehending the same story and found a significant relationship between the fNIRS oxygenated-hemoglobin concentration changes and the fMRI BOLD in brain areas associated with speech comprehension. This correlation between fNIRS and fMRI was only present when data from the same story were compared between the two modalities and vanished when data from different stories were compared; this cross-modality consistency further highlights the reliability of the spatiotemporal brain activation pattern as a measure of story comprehension. Our findings suggest that fNIRS can be used for investigating brain-to-brain coupling during verbal communication in natural settings.
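The coupling measure used here, inter-subject correlation of hemodynamic time series with the listener lagging the speaker, can be sketched in a few lines. This is an illustrative sketch only: the function name and the synthetic signals are invented for the demo, not taken from the paper.

```python
import numpy as np

def lagged_correlation(speaker, listener, max_lag):
    """Pearson correlation between two hemodynamic time series,
    shifting the listener signal back by 0..max_lag samples."""
    corrs = []
    for lag in range(max_lag + 1):
        s = speaker[:speaker.size - lag] if lag else speaker
        corrs.append(np.corrcoef(s, listener[lag:])[0, 1])
    return np.array(corrs)

# Synthetic demo: the "listener" follows the "speaker" with a 5-sample delay.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(200)
listener = np.roll(speaker, 5) + 0.1 * rng.standard_normal(200)
corrs = lagged_correlation(speaker, listener, max_lag=10)
best_lag = int(np.argmax(corrs))   # recovers the 5-sample delay
```

The peak of the lag-correlation profile gives both the strength of the coupling and the delay at which the listener's activity best matches the speaker's.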
14
Chang CHC, Kuo WJ. The Neural Substrates Underlying the Implementation of Phonological Rule in Lexical Tone Production: An fMRI Study of the Tone 3 Sandhi Phenomenon in Mandarin Chinese. PLoS One 2016; 11:e0159835. [PMID: 27455078] [PMCID: PMC4959711] [DOI: 10.1371/journal.pone.0159835]
Abstract
This study examined the neural substrates underlying the implementation of a phonological rule in lexical tone production, using the Tone 3 sandhi phenomenon in Mandarin Chinese. Tone 3 sandhi is traditionally described as the substitution of Tone 3 with Tone 2 when followed by another Tone 3 (33 → 23) during speech production. Tone 3 sandhi enables the examination of tone processing at the phonological level with the least involvement of segments. Using fMRI, we measured brain activations corresponding to monosyllable and disyllable sequences of the four Chinese lexical tones, while manipulating the requirement for an overt oral response. The application of Tone 3 sandhi to disyllable sequences of Tone 3 was confirmed by our behavioral results. Larger brain responses to overtly produced disyllable Tone 3 (33 > 11, 22, and 44) were found in the right posterior IFG by both whole-brain and ROI analyses. We suggest that the right IFG was responsible for the processing of Tone 3 sandhi. Intense temporo-frontal interaction is needed in speech production for self-monitoring. The involvement of the right IFG in tone production might result from its interaction with the right auditory cortex, which is known to specialize in pitch processing. Future studies using tools with better temporal resolution are needed to illuminate the dynamic interaction between the right inferior frontal regions and the left-lateralized language network in tone languages.
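The sandhi rule itself is easy to state procedurally. A toy sketch (the helper function is hypothetical, not from the study) applies it right to left over a sequence of tone numbers:

```python
def apply_tone3_sandhi(tones):
    """Mandarin Tone 3 sandhi: a Tone 3 followed by another Tone 3
    surfaces as Tone 2 (33 -> 23). Applied right to left over a simple
    tone-number sequence; real speech additionally involves prosodic
    grouping, which is not modeled here."""
    out = list(tones)
    for i in range(len(out) - 2, -1, -1):
        if out[i] == 3 and out[i + 1] == 3:
            out[i] = 2
    return out

apply_tone3_sandhi([3, 3])   # the disyllable Tone 3 condition: [2, 3]
```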
Affiliation(s)
- Claire H. C. Chang: Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; College of Humanities and Social Sciences, Taipei Medical University, Taipei, Taiwan
- Wen-Jui Kuo: Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan
15
Husain FT. Neural networks of tinnitus in humans: Elucidating severity and habituation. Hear Res 2016; 334:37-48. [DOI: 10.1016/j.heares.2015.09.010]
16
Scharinger M, Bendixen A, Herrmann B, Henry MJ, Mildner T, Obleser J. Predictions interact with missing sensory evidence in semantic processing areas. Hum Brain Mapp 2015; 37:704-16. [PMID: 26583355] [DOI: 10.1002/hbm.23060]
Abstract
Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704-716, 2016. © 2015 Wiley Periodicals, Inc.
Affiliation(s)
- Mathias Scharinger: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Alexandra Bendixen: Department of Physics, School of Natural Sciences, Chemnitz University of Technology, Chemnitz, Germany
- Björn Herrmann: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, Brain and Mind Institute, University of Western Ontario, London, Canada
- Molly J Henry: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, Brain and Mind Institute, University of Western Ontario, London, Canada
- Toralf Mildner: Nuclear Magnetic Resonance Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, University of Lübeck, Lübeck, Germany
17
Abstract
In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. 
Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the formation of the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex to exhibit operational heterogeneity challenged the established concept of unimodality of this region.
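The graph-theoretical approach described above can be illustrated in miniature: threshold a functional connectivity matrix, take node degree as a simple hub measure, and read communities off the thresholded graph. The connectivity values below are synthetic, not the study's data, and connected components stand in for the more sophisticated modularity-based community detection used in such work.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
# Toy "functional connectivity": two strongly intra-connected groups of
# regions (0-4 and 5-9) with only weak between-group coupling.
fc = rng.uniform(0.0, 0.2, size=(n, n))
fc[:5, :5] = rng.uniform(0.6, 0.9, size=(5, 5))
fc[5:, 5:] = rng.uniform(0.6, 0.9, size=(5, 5))
fc = (fc + fc.T) / 2                  # symmetrize
np.fill_diagonal(fc, 0.0)

adj = fc > 0.5                        # keep only strong functional links
degree = adj.sum(axis=1)              # node degree = number of strong links
hubs = np.argsort(degree)[::-1][:3]   # most densely connected regions

# Communities as connected components of the thresholded graph (DFS).
unvisited, communities = set(range(n)), []
while unvisited:
    stack = [unvisited.pop()]
    comp = set(stack)
    while stack:
        node = stack.pop()
        for nb in np.flatnonzero(adj[node]):
            if nb in unvisited:
                unvisited.discard(int(nb))
                comp.add(int(nb))
                stack.append(int(nb))
    communities.append(comp)
```

On this toy matrix the procedure recovers the two planted groups as separate communities, mirroring how the study's network analysis segregates task-specific neural communities.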
Affiliation(s)
- Stefan Fuertinger: Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Barry Horwitz: Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland, United States of America
- Kristina Simonyan: Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
18
Scharinger M, Henry MJ, Obleser J. Acoustic cue selection and discrimination under degradation: differential contributions of the inferior parietal and posterior temporal cortices. Neuroimage 2014; 106:373-81. [PMID: 25481793] [DOI: 10.1016/j.neuroimage.2014.11.050]
Abstract
Auditory categorization is a vital skill for perceiving the acoustic environment. Categorization depends on the discriminability of the sensory input as well as on the ability of the listener to adaptively make use of the relevant features of the sound. Previous studies on categorization have focused either on speech sounds when studying discriminability or on visual stimuli when assessing optimal cue utilization. Here, by contrast, we examined neural sensitivity to stimulus discriminability and optimal cue utilization when categorizing novel, non-speech auditory stimuli not affected by long-term familiarity. In a functional magnetic resonance imaging (fMRI) experiment, listeners categorized sounds from two category distributions, differing along two acoustic dimensions: spectral shape and duration. By introducing spectral degradation after the first half of the experiment, we manipulated both stimulus discriminability and the relative informativeness of acoustic cues. Degradation caused an overall decrease in discriminability based on spectral shape, and therefore enhanced the informativeness of duration. A relative increase in duration-cue utilization was accompanied by increased activity in left parietal cortex. Further, discriminability modulated right planum temporale activity to a higher degree when stimuli were spectrally degraded than when they were not. These findings provide support for separable contributions of parietal and posterior temporal areas to perceptual categorization. The parietal cortex seems to support the selective utilization of informative stimulus cues, while the posterior superior temporal cortex as a primarily auditory brain area supports discriminability particularly under acoustic degradation.
Affiliation(s)
- Mathias Scharinger: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J Henry: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
19
Harinen K, Rinne T. Acoustical and categorical tasks differently modulate activations of human auditory cortex to vowels. Brain Lang 2014; 138:71-79. [PMID: 25313844] [DOI: 10.1016/j.bandl.2014.09.006]
Abstract
The present study compared activations to prototype, nonprototype, nonphonemic, and cross-category vowel pairs during vowel discrimination, category discrimination, 2-back, and visual tasks. Our results support previous findings that areas of the superior temporal gyrus (STG) are sensitive to the speech-level difference between prototype vs. nonprototype and phonemic vs. nonphonemic vowels. Further, consistent with previous studies, we found enhanced activations in anterior-posterior STG and inferior parietal lobule (IPL) during the vowel discrimination and 2-back tasks, respectively. Unlike the vowel discrimination task, the category discrimination task was associated with strong IPL activations. Our results provide evidence that activations in STG and IPL strongly depend on whether the task requires analysis of detailed acoustical information or operations on categorical representations. Based on previous studies investigating activations during categorical pitch and spatial tasks, we argue that this distinction is probably not specific to speech.
Affiliation(s)
- Kirsi Harinen: Institute of Behavioural Sciences, University of Helsinki, Finland
- Teemu Rinne: Institute of Behavioural Sciences, University of Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Finland
20
Elmer S, Klein C, Kühnis J, Liem F, Meyer M, Jäncke L. Music and Language Expertise Influence the Categorization of Speech and Musical Sounds: Behavioral and Electrophysiological Measurements. J Cogn Neurosci 2014; 26:2356-69. [DOI: 10.1162/jocn_a_00632]
Abstract
In this study, we used high-density EEG to evaluate whether speech and music expertise has an influence on the categorization of expertise-related and unrelated sounds. With this purpose in mind, we compared the categorization of speech, music, and neutral sounds between professional musicians, simultaneous interpreters (SIs), and controls in response to morphed speech–noise, music–noise, and speech–music continua. Our hypothesis was that music and language expertise will strengthen the memory representations of prototypical sounds, which act as a perceptual magnet for morphed variants. This means that the prototype would “attract” variants. This so-called magnet effect should be manifested by an increased assignment of morphed items to the trained category, by a reduced maximal slope of the psychometric function, as well as by differential event-related brain responses reflecting memory comparison processes (i.e., N400 and P600 responses). As a main result, we provide first evidence for a domain-specific behavioral bias of musicians and SIs toward the trained categories, namely music and speech. In addition, SIs showed a bias toward musical items, indicating that interpreting training has a generic influence on the cognitive representation of spectrotemporal signals with similar acoustic properties to speech sounds. Notably, EEG measurements revealed clear distinct N400 and P600 responses to both prototypical and ambiguous items between the three groups at anterior, central, and posterior scalp sites. These differential N400 and P600 responses represent synchronous activity occurring across widely distributed brain networks, and indicate a dynamical recruitment of memory processes that vary as a function of training and expertise.
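The "maximal slope of the psychometric function" mentioned above is typically obtained by fitting a sigmoid to the proportion of trained-category responses along the morphing continuum and evaluating its steepness at the category boundary. A sketch with invented categorization data (none of the numbers come from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of a 'trained-category'
    response; x0 = category boundary, k = steepness."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical response proportions along an 11-step morph continuum.
morph = np.linspace(0, 1, 11)
p_trained = np.array([0.02, 0.03, 0.05, 0.10, 0.30, 0.55,
                      0.80, 0.92, 0.97, 0.98, 0.99])

(x0, k), _ = curve_fit(logistic, morph, p_trained, p0=[0.5, 10.0])
max_slope = k / 4.0   # slope of the logistic at its midpoint x0
```

A perceptual-magnet effect of the kind hypothesized here would show up as a shifted boundary `x0` toward the trained category and a reduced `max_slope` (a shallower psychometric function).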
Affiliation(s)
- Martin Meyer: University of Zurich; Center for Integrative Human Physiology, Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland
- Lutz Jäncke: University of Zurich; Center for Integrative Human Physiology, Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center, Zurich, Switzerland; King Abdulaziz University, Jeddah, Saudi Arabia
21
Scharinger M, Herrmann B, Nierhaus T, Obleser J. Simultaneous EEG-fMRI brain signatures of auditory cue utilization. Front Neurosci 2014; 8:137. [PMID: 24926232] [PMCID: PMC4044900] [DOI: 10.3389/fnins.2014.00137]
Abstract
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, the acoustic environment requires flexibly choosing and switching among available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared with nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in the inferior parietal cortex and right posterior superior temporal gyrus (including the planum temporale). In both areas, spectral degradation led to a weaker coupling of the BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here, spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
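Alpha power, the key EEG measure in this study, is commonly estimated as spectral power in the 8-12 Hz band of the power spectral density. A self-contained sketch with a synthetic signal (the sampling rate and amplitudes are arbitrary assumptions, not the study's parameters):

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic EEG channel: a 10 Hz (alpha) oscillation buried in noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Welch power spectral density, then average PSD inside the alpha band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
alpha_band = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha_band].mean()      # mean PSD in 8-12 Hz
baseline = psd[~alpha_band].mean()        # mean PSD outside the band
```

Single-trial values of this kind are what an EEG-informed fMRI analysis then enters as a parametric regressor for the BOLD signal.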
Affiliation(s)
- Mathias Scharinger: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Björn Herrmann: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Till Nierhaus: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser: Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
22
Ley A, Vroomen J, Formisano E. How learning to abstract shapes neural sound representations. Front Neurosci 2014; 8:132. [PMID: 24917783] [PMCID: PMC4043152] [DOI: 10.3389/fnins.2014.00132]
Abstract
The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through the integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing have employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities of modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes, even in the absence of changes in overall signal level, these analysis techniques provide a promising tool for revealing the neural underpinnings of perceptually invariant sound representations.
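The MVPA logic highlighted in this review, decoding a category from a distributed activation pattern even when the mean signal level is matched across categories, can be sketched with synthetic "voxel" data (all names and numbers below are illustrative, not from any cited study):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 50
# Synthetic single-trial activation patterns for two sound categories:
# the category difference is a distributed multivoxel pattern with
# zero net amplitude change (no univariate signal-level difference).
pattern = rng.standard_normal(n_voxels)
pattern -= pattern.mean()                 # zero-mean spatial pattern
labels = np.repeat([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_voxels))
X[labels == 1] += 0.5 * pattern

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, labels, cv=5)  # 5-fold decoding accuracy
```

Above-chance cross-validated accuracy here comes entirely from the distributed pattern, which is exactly the sensitivity advantage of MVPA over overall-signal-level contrasts that the review emphasizes.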
Collapse
Affiliation(s)
- Anke Ley
- Department of Medical Psychology and Neuropsychology, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Jean Vroomen
- Department of Medical Psychology and Neuropsychology, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
23
Frühholz S, Grandjean D. Processing of emotional vocalizations in bilateral inferior frontal cortex. Neurosci Biobehav Rev 2013; 37:2847-55. [PMID: 24161466] [DOI: 10.1016/j.neubiorev.2013.10.007]
Abstract
A current view proposes that the right inferior frontal cortex (IFC) is particularly responsible for the attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Although some studies seem to support this view, an exhaustive review of all recent imaging studies points to an important functional role of both the right and the left IFC in processing vocal emotions. Second, besides the supposed predominant role of the IFC in attentive processing and evaluation of emotional voices, these recent studies also point to a possible role of the IFC in preattentive and implicit processing of vocal emotions. The studies specifically provide evidence that both the right and the left IFC show a similar anterior-to-posterior gradient of functional activity in response to emotional vocalizations. This bilateral IFC gradient depends both on the nature or medium of emotional vocalizations (emotional prosody versus nonverbal expressions) and on the level of attentive processing (explicit versus implicit processing), closely resembling the distribution of terminal regions of distinct auditory pathways, which provide either global or dynamic acoustic information. Here we suggest a functional distribution in which several IFC subregions process different acoustic information conveyed by emotional vocalizations. Whereas the rostro-ventral IFC might categorize emotional vocalizations, the caudo-dorsal IFC might be specifically sensitive to their temporal features.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
24
Scharinger M, Henry MJ, Erb J, Meyer L, Obleser J. Thalamic and parietal brain morphology predicts auditory category learning. Neuropsychologia 2013; 53:75-83. [PMID: 24035788] [DOI: 10.1016/j.neuropsychologia.2013.09.012]
Abstract
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Molly J Henry
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Julia Erb
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lars Meyer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
25
Vandermosten M, Poelmans H, Sunaert S, Ghesquière P, Wouters J. White matter lateralization and interhemispheric coherence to auditory modulations in normal reading and dyslexic adults. Neuropsychologia 2013; 51:2087-99. [PMID: 23872049] [DOI: 10.1016/j.neuropsychologia.2013.07.008]
Abstract
Neural activation to slow acoustic variations, which are important for syllable identification, is more lateralized to the right hemisphere than activation to fast acoustic changes, which are important for phoneme identification. It has been suggested that this complementary function of the two hemispheres is rooted in a different degree of white matter myelination in the left versus the right hemisphere. The present study investigates this structure-function relationship with Diffusion Tensor Imaging (DTI) and Auditory Steady-State Responses (ASSR), respectively. With DTI we examined white matter lateralization in the cortical auditory and language regions (i.e. the posterior region of the superior temporal gyrus and the arcuate fasciculus) and white matter integrity in the splenium of the corpus callosum. With ASSR we examined interhemispheric coherence to slow, syllabic-rate (i.e. 4 Hz) and fast, phonemic-rate (i.e. 20 Hz) modulations. These structural and functional techniques were applied in a group of normal-reading adults and a group of dyslexic adults for whom reduced functional interhemispheric connectivity at 20 Hz had previously been reported (Poelmans et al. (2012). Ear and Hearing, 33, 134-143). This sample was chosen because it is hypothesized that in dyslexic readers insufficient hemispheric asymmetry in myelination might relate to their auditory and phonological problems. Results demonstrate reduced white matter lateralization in the posterior superior temporal gyrus and the arcuate fasciculus in the dyslexic readers. Additionally, white matter lateralization in the posterior superior temporal gyrus and white matter integrity in the splenium of the corpus callosum related to interhemispheric coherence to phonemic-rate (i.e. 20 Hz) modulations. Interestingly, this correlation pattern was opposite in normal versus dyslexic readers. These results may imply that the less pronounced left white matter dominance in dyslexic adults relates to their difficulties in processing phonemic-rate acoustic information and integrating it into the phonological system.
Affiliation(s)
- Maaike Vandermosten
- ExpORL, Department of Neurosciences, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; Parenting and Special Education Research Unit, KU Leuven, Leopold Vanderkelenstraat 32, PO Box 3765, 3000 Leuven, Belgium; Radiology Section, KU Leuven, Herestraat 49, 3000 Leuven, Belgium.
26
Abstract
Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.
27
Alvarenga KDF, Vicente LC, Lopes RCF, Silva RAD, Banhara MR, Lopes AC, Jacob-Corteletti LCB. The influence of speech stimuli contrast in cortical auditory evoked potentials. Braz J Otorhinolaryngol 2013; 79:336-41. [PMID: 23743749] [PMCID: PMC9443885] [DOI: 10.5935/1808-8694.20130059]
Affiliation(s)
- Kátia de Freitas Alvarenga
- Department of Speech and Hearing Therapy, School of Dentistry, University of São Paulo, Bauru campus, Brazil.
28
Harinen K, Rinne T. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks. Neuroimage 2013; 77:279-87. [PMID: 23567885] [DOI: 10.1016/j.neuroimage.2013.03.064]
Abstract
We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels.
Affiliation(s)
- Kirsi Harinen
- Institute of Behavioural Sciences, University of Helsinki, Finland
29
Husain FT, Patkin DJ, Kim J, Braun AR, Horwitz B. Dissociating neural correlates of meaningful emblems from meaningless gestures in deaf signers and hearing non-signers. Brain Res 2012; 1478:24-35. [PMID: 22968047] [PMCID: PMC3477813] [DOI: 10.1016/j.brainres.2012.08.029]
Abstract
Emblems are meaningful, culturally-specific hand gestures that are analogous to words. In this fMRI study, we contrasted the processing of emblematic gestures with meaningless gestures by pre-lingually Deaf and hearing participants. Deaf participants, who used American Sign Language, activated bilateral auditory processing and associative areas in the temporal cortex to a greater extent than the hearing participants while processing both types of gestures relative to rest. The hearing non-signers activated a diverse set of regions, including those implicated in the mirror neuron system, such as premotor cortex (BA 6) and inferior parietal lobule (BA 40) for the same contrast. Further, when contrasting the processing of meaningful to meaningless gestures (both relative to rest), the Deaf participants, but not the hearing, showed greater response in the left angular and supramarginal gyri, regions that play important roles in linguistic processing. These results suggest that whereas the signers interpreted emblems to be comparable to words, the non-signers treated emblems as similar to pictorial descriptions of the world and engaged the mirror neuron system.
Affiliation(s)
- Fatima T Husain
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA.
30
Banerjee A, Pillai AS, Sperling JR, Smith JF, Horwitz B. Temporal microstructure of cortical networks (TMCN) underlying task-related differences. Neuroimage 2012; 62:1643-57. [PMID: 22728151] [PMCID: PMC3408836] [DOI: 10.1016/j.neuroimage.2012.06.014]
Abstract
Neuro-electromagnetic recording techniques (EEG, MEG, iEEG) provide high temporal resolution data to study the dynamics of neurocognitive networks: large-scale neural assemblies involved in task-specific information processing. How does a neurocognitive network reorganize spatiotemporally on the order of a few milliseconds to process specific aspects of the task? At what times do networks segregate for task processing, and at what time scales does integration of information occur via changes in functional connectivity? Here, we propose a data analysis framework, Temporal Microstructure of Cortical Networks (TMCN), that answers these questions for EEG/MEG recordings in the signal space. Method validation is established on simulated MEG data from a delayed-match-to-sample (DMS) task. We then provide an example application on MEG recordings during a paired-associate task (modified from the simpler DMS paradigm) designed to study modality-specific long-term memory recall. Our analysis identified the times at which network segregation occurs for processing the memory recall of an auditory object paired to a visual stimulus (visual-auditory) in comparison to an analogous visual-visual pair. Across all subjects, onset times for the first network divergence appeared within a range of 0.08-0.47 s after initial visual stimulus onset. This indicates that visual-visual and visual-auditory memory recollection involves equivalent network components, without any additional recruitment, during an initial period of the sensory processing stage, which is then followed by recruitment of additional network components for modality-specific memory recollection. Therefore, we propose TMCN as a viable computational tool for extracting network timing in various cognitive tasks.
Affiliation(s)
- Arpan Banerjee
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA.
31
Zheng W, Ackley ES, Martínez-Ramón M, Posse S. Spatially aggregated multiclass pattern classification in functional MRI using optimally selected functional brain areas. Magn Reson Imaging 2012; 31:247-61. [PMID: 22902471] [DOI: 10.1016/j.mri.2012.07.010]
Abstract
In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation.
Affiliation(s)
- Weili Zheng
- Department of Neurology, School of Medicine, University of New Mexico, Albuquerque, NM, USA.
32
Yoo S, Chung JY, Jeon HA, Lee KM, Kim YB, Cho ZH. Dual routes for verbal repetition: articulation-based and acoustic-phonetic codes for pseudoword and word repetition, respectively. Brain Lang 2012; 122:1-10. [PMID: 22632812] [DOI: 10.1016/j.bandl.2012.04.011]
Abstract
Speech production is inextricably linked to speech perception, yet the two are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with both perception and production engaged simultaneously, using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found that verbal repetition commonly activated the audition-articulation interface bilaterally at the Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activity unique to word repetition in the left posterior middle temporal areas and activity unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the two tasks are carried out using different speech codes: an articulation-based code for pseudowords and an acoustic-phonetic code for words. They also support the dual-stream model and imitative learning of vocabulary.
Affiliation(s)
- Sejin Yoo
- Interdisciplinary Program in Cognitive Science, Seoul National University, Republic of Korea
33
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224] [PMCID: PMC3398395] [DOI: 10.1016/j.neuroimage.2012.04.062]
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order in which they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation, and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated, but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
34
Categorical speech processing in Broca's area: an fMRI study using multivariate pattern-based analysis. J Neurosci 2012; 32:3942-8. [PMID: 22423114] [DOI: 10.1523/jneurosci.3814-11.2012]
Abstract
Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, individuals' own category boundaries were measured to divide the fMRI data into /ba/ and /da/ conditions per subject. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs /da/). Broca's area was also found when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which previously yielded the supramarginal gyrus using a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role of translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggests that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.
35
Banerjee A, Pillai AS, Horwitz B. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution. Front Syst Neurosci 2012; 5:102. [PMID: 22291621] [PMCID: PMC3258667] [DOI: 10.3389/fnsys.2011.00102]
Abstract
Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match what actually occurred at the neuronal network level.
Affiliation(s)
- Arpan Banerjee
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health (NIH), Bethesda, MD, USA
36
Discrimination task reveals differences in neural bases of tinnitus and hearing impairment. PLoS One 2011; 6:e26639. [PMID: 22066003] [PMCID: PMC3204998] [DOI: 10.1371/journal.pone.0026639]
Abstract
We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone.
37
Rong F, Holroyd T, Husain FT, Contreras-Vidal JL, Horwitz B. Task-specific modulation of human auditory evoked response in a delayed-match-to-sample task. Front Psychol 2011; 2:85. [PMID: 21687454] [PMCID: PMC3110394] [DOI: 10.3389/fpsyg.2011.00085]
Abstract
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal–temporal functional interactions.
Affiliation(s)
- Feng Rong
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health Bethesda, MD, USA
|
38
|
Mitterauer BJ, Kofler-Westergren B. Possible effects of synaptic imbalances on oligodendrocyte-axonic interactions in schizophrenia: a hypothetical model. Front Psychiatry 2011; 2:15. [PMID: 21647404 PMCID: PMC3102422 DOI: 10.3389/fpsyt.2011.00015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/17/2011] [Accepted: 03/28/2011] [Indexed: 11/13/2022] Open
Abstract
A model of glial-neuronal interactions is proposed that could explain the demyelination identified in brains with schizophrenia. It is based on two hypotheses: (1) that glia-neuron systems are functionally viable and important for normal brain function, and (2) that disruption of this postulated function disturbs the glial categorization function, as shown by formal analysis. According to this model, in schizophrenia receptors on astrocytes in glial-neuronal synaptic units are not functional, losing their modulatory influence on synaptic neurotransmission. Hence, an unconstrained neurotransmission flux occurs that hyperactivates the axon and floods the cognate neurotransmitter receptors on oligodendrocytes. The excess of neurotransmitters may have a toxic effect on oligodendrocytes and myelin, causing demyelination. In parallel, increasing impairment of axons may disconnect neuronal networks. It is formally shown how oligodendrocytes normally categorize axonic information processing via their processes. Demyelination decomposes the oligodendrocyte-axonic system, making it incapable of generating categories of information. This incoherence may be responsible for symptoms of disorganization in schizophrenia, such as thought disorder, inappropriate affect, and incommunicable motor behavior. In parallel, the loss of oligodendrocytes affects gap junctions in the panglial syncytium, which is presumably responsible for memory impairment in schizophrenia.
Affiliation(s)
- Bernhard J. Mitterauer
- Volitronics – Institute for Basic Research, Psychopathology and Brain Philosophy, Wals/Salzburg, Austria
|
39
|
Dayalu VN, Guntupalli VK, Kalinowski J, Stuart A, Saltuklaroglu T, Rastatter MP. Effect of continuous speech and non-speech signals on stuttering frequency in adults who stutter. LOGOP PHONIATR VOCO 2011; 36:121-7. [PMID: 21385148 DOI: 10.3109/14015439.2011.562535] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
The inhibitory effects on stuttering of continuously presented audio signals (/a/, /s/, and a 1,000 Hz pure tone) were examined. Eleven adults who stutter participated. Participants read four 300-syllable passages, one in the presence of each audio signal and one in their absence (control). All of the audio signals induced a significant reduction in stuttering frequency relative to the control condition (P = 0.005). A significantly greater reduction in stuttering occurred in the /a/ condition (P < 0.05), while there was no significant difference between the /s/ and 1,000 Hz pure-tone conditions (P > 0.05). These findings are consistent with the notion that perceiving a second signal as speech or non-speech can respectively augment or attenuate its potency for reducing stuttering frequency.
Affiliation(s)
- Vikram N Dayalu
- Department of Speech-Language Pathology, Seton Hall University, South Orange, NJ 07079, USA.
|
40
|
Tsunada J, Lee JH, Cohen YE. Representation of speech categories in the primate auditory cortex. J Neurophysiol 2011; 105:2634-46. [PMID: 21346209 DOI: 10.1152/jn.00037.2011] [Citation(s) in RCA: 70] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
A "ventral" auditory pathway in nonhuman primates that originates in the core auditory cortex and ends in the prefrontal cortex is thought to be involved in components of nonspatial auditory processing. Previous work from our laboratory has indicated that neurons in the prefrontal cortex reflect monkeys' decisions during categorical judgments. Here, we tested the role of the superior temporal gyrus (STG), a region of the secondary auditory cortex that is part of this ventral pathway, during similar categorical judgments. While monkeys participated in a match-to-category task and reported whether two consecutive auditory stimuli belonged to the same category or to different categories, we recorded spiking activity from STG neurons. The auditory stimuli were morphs of two human-speech sounds ("bad" and "dad"). We found that STG neurons represented auditory categories. However, unlike activity in the prefrontal cortex, STG activity was not modulated by the monkeys' behavioral reports (choices). This finding is consistent with the anterolateral STG's role as part of a functional circuit involved in the coding, representation, and perception of the nonspatial features of an auditory stimulus.
Affiliation(s)
- Joji Tsunada
- Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania School of Medicine, Philadelphia, PA 19104, USA
|
41
|
Turkeltaub PE, Coslett HB. Localization of sublexical speech perception components. BRAIN AND LANGUAGE 2010; 114:1-15. [PMID: 20413149 PMCID: PMC2914564 DOI: 10.1016/j.bandl.2010.03.008] [Citation(s) in RCA: 175] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2009] [Revised: 03/22/2010] [Accepted: 03/28/2010] [Indexed: 05/04/2023]
Abstract
Models of speech perception are in general agreement with respect to the major cortical regions involved, but lack precision with regard to localization and lateralization of processing units. To refine these models we conducted two Activation Likelihood Estimation (ALE) meta-analyses of the neuroimaging literature on sublexical speech perception. Based on foci reported in 23 fMRI experiments, we identified significant activation likelihoods in left and right superior temporal cortex and the left posterior middle frontal gyrus. Sub-analyses examining phonetic and phonological processes revealed only left mid-posterior superior temporal sulcus activation likelihood. A lateralization analysis demonstrated temporal lobe left lateralization in terms of magnitude, extent, and consistency of activity. Experiments requiring explicit attention to phonology drove this lateralization. An ALE analysis of eight fMRI studies on categorical phoneme perception revealed significant activation likelihood in the left supramarginal gyrus and angular gyrus. These results are consistent with a speech processing network in which the bilateral superior temporal cortices perform acoustic analysis of speech and non-speech auditory stimuli, the left mid-posterior superior temporal sulcus performs phonetic and phonological analysis, and the left inferior parietal lobule is involved in detection of differences between phoneme categories. These results modify current speech perception models in three ways: (1) specifying the most likely locations of dorsal stream processing units, (2) clarifying that phonetic and phonological superior temporal sulcus processing is left lateralized and localized to the mid-posterior portion, and (3) suggesting that both the supramarginal gyrus and angular gyrus may be involved in phoneme discrimination.
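The core ALE step summarized above (each reported focus is modeled as a 3D Gaussian, and the per-focus probability maps are combined voxelwise as a union of probabilities) can be sketched as follows. The grid size, kernel width, peak probability, and foci are made-up illustrative values, not those of the meta-analysis:

```python
import numpy as np

shape = (20, 20, 20)                 # toy voxel grid
fwhm = 3.0                           # Gaussian kernel FWHM in voxels (assumed)
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
foci = [(8, 10, 10), (9, 10, 11), (15, 4, 4)]   # hypothetical reported peaks

zz, yy, xx = np.indices(shape)

def modeled_activation(focus, peak=0.5):
    """Gaussian probability blob centred on one reported focus."""
    d2 = (zz - focus[0]) ** 2 + (yy - focus[1]) ** 2 + (xx - focus[2]) ** 2
    return peak * np.exp(-d2 / (2.0 * sigma ** 2))

# ALE combines the per-focus maps voxelwise as a union of probabilities,
# so voxels where foci from different studies converge score higher than
# any single focus alone.
ale = 1.0 - np.prod([1.0 - modeled_activation(f) for f in foci], axis=0)
print("peak ALE value:", round(float(ale.max()), 3))
```

Here the two neighbouring foci reinforce each other, so the ALE peak exceeds the 0.5 contributed by the isolated third focus; that sensitivity to cross-study convergence is the point of the statistic.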
Affiliation(s)
- Peter E Turkeltaub
- Department of Neurology, University of Pennsylvania, 3400 Spruce Street, 3 West Gates Building, Philadelphia, PA 19104, USA.
|
42
|
Adults with dyslexia are impaired in categorizing speech and nonspeech sounds on the basis of temporal cues. Proc Natl Acad Sci U S A 2010; 107:10389-94. [PMID: 20498069 DOI: 10.1073/pnas.0912858107] [Citation(s) in RCA: 87] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Developmental dyslexia is characterized by severe reading and spelling difficulties that are persistent and resistant to the usual didactic measures and remedial efforts. It is well established that a major cause of these problems lies in poorly specified representations of speech sounds. One hypothesis states that this phonological deficit results from a more fundamental deficit in auditory processing. Despite substantial research effort, the specific nature of these auditory problems remains debated. A first controversy concerns the speech specificity of the auditory processing problems: Can they be reduced to more basic auditory processing, or are they specific to the perception of speech sounds? A second topic of debate concerns the extent to which the auditory problems are specific to the processing of rapidly changing temporal information or whether they encompass a broader range of complex spectro-temporal processing. By applying a balanced design with stimuli that were adequately controlled for acoustic complexity, we show that adults with dyslexia are specifically impaired at categorizing speech and nonspeech sounds that differ in terms of rapidly changing acoustic cues (i.e., temporal cues), but that they perform adequately when categorizing steady-state speech and nonspeech sounds. Thus, we show that individuals with dyslexia have an auditory temporal processing deficit that is not speech-specific.
|
43
|
Modeling the categorical perception of speech sounds: a step toward biological plausibility. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2009; 9:304-13. [PMID: 19679765 DOI: 10.3758/cabn.9.3.304] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Our native language has a lifelong effect on how we perceive speech sounds. Behaviorally, this is manifested as categorical perception, but the neural mechanisms underlying this phenomenon are still unknown. Here, we constructed a computational model of categorical perception, following principles consistent with infant speech learning. A self-organizing network was exposed to a statistical distribution of speech input presented as neural activity patterns of the auditory periphery, resembling the way sound arrives at the human brain. In the resulting neural map, categorical perception emerges from most single neurons of the model being maximally activated by prototypical speech sounds, while the largest variability in activity is produced at category boundaries. Consequently, regions in the vicinity of prototypes become perceptually compressed, and regions at category boundaries become expanded. Thus, the present study offers a unifying framework for explaining the neural basis of the warping of perceptual space associated with categorical perception.
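A minimal one-dimensional self-organizing map captures the warping described above: trained on a bimodal "formant" distribution, most units settle near the two category prototypes, leaving the boundary region sparsely covered. All values (prototype frequencies, map size, learning schedule) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
prototypes = (300.0, 700.0)              # two hypothetical formant peaks (Hz)
samples = np.concatenate([
    rng.normal(prototypes[0], 30, 2000),
    rng.normal(prototypes[1], 30, 2000),
])
rng.shuffle(samples)

n_units = 20
w = rng.uniform(200, 800, n_units)       # initial unit weights
for step, x in enumerate(samples):
    lr = 0.5 * (1 - step / samples.size)                # decaying learning rate
    radius = max(1.0, 4.0 * (1 - step / samples.size))  # shrinking neighbourhood
    bmu = np.argmin(np.abs(w - x))                      # best-matching unit
    dist = np.abs(np.arange(n_units) - bmu)
    h = np.exp(-dist ** 2 / (2 * radius ** 2))          # neighbourhood function
    w += lr * h * (x - w)

near_proto = np.sum((np.abs(w - 300) < 60) | (np.abs(w - 700) < 60))
print(f"{near_proto} of {n_units} units within 60 Hz of a prototype")
```

Because unit density follows input density, the map is dense near the prototypes and sparse at the category boundary: the compression/expansion pattern the abstract describes.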
|
44
|
Husain FT, Patkin DJ, Thai-Van H, Braun AR, Horwitz B. Distinguishing the processing of gestures from signs in deaf individuals: an fMRI study. Brain Res 2009; 1276:140-50. [PMID: 19397900 DOI: 10.1016/j.brainres.2009.04.034] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2008] [Revised: 04/13/2009] [Accepted: 04/14/2009] [Indexed: 11/16/2022]
Abstract
Manual gestures occur on a continuum from co-speech gesticulations to conventionalized emblems to language signs. Our goal in the present study was to understand the neural bases of the processing of gestures along such a continuum. We studied four types of gestures, varying along linguistic and semantic dimensions: linguistic and meaningful American Sign Language (ASL), non-meaningful pseudo-ASL, meaningful emblematic, and nonlinguistic, non-meaningful made-up gestures. Pre-lingually deaf, native signers of ASL participated in the fMRI study and performed two tasks while viewing videos of the gestures: a visuo-spatial (identity) discrimination task and a category discrimination task. We found that the categorization task activated left ventral middle and inferior frontal gyrus, among other regions, to a greater extent compared to the visual discrimination task, supporting the idea of semantic-level processing of the gestures. The reverse contrast resulted in enhanced activity of bilateral intraparietal sulcus, supporting the idea of featural-level processing (analogous to phonological-level processing of speech sounds) of the gestures. Regardless of the task, we found that brain activation patterns for the nonlinguistic, non-meaningful gestures were the most different compared to the ASL gestures. The activation patterns for the emblems were most similar to those of the ASL gestures and those of the pseudo-ASL were most similar to the nonlinguistic, non-meaningful gestures. The fMRI results provide partial support for the conceptualization of different gestures as belonging to a continuum and the variance in the fMRI results was best explained by differences in the processing of gestures along the semantic dimension.
Affiliation(s)
- Fatima T Husain
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA.
|
45
|
Chang SE, Kenney MK, Loucks TMJ, Poletto CJ, Ludlow CL. Common neural substrates support speech and non-speech vocal tract gestures. Neuroimage 2009; 47:314-25. [PMID: 19327400 DOI: 10.1016/j.neuroimage.2009.03.032] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2008] [Revised: 02/23/2009] [Accepted: 03/11/2009] [Indexed: 10/21/2022] Open
Abstract
The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as producing nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of meaningless speech syllables. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap between speech- and non-speech-related activation in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings suggest a more general role for the previously proposed "auditory dorsal stream" in the left hemisphere: to support the production of vocal tract gestures that are not limited to speech processing.
Affiliation(s)
- Soo-Eun Chang
- Laryngeal and Speech Section, Medical Neurology Branch, NINDS/NIH, 10 Center Dr. MSC 1416 Building 10, Room 5D38, Bethesda, MD 20892, USA
|
46
|
König R, Sieluzycki C, Simserides C, Heil P, Scheich H. Effects of the task of categorizing FM direction on auditory evoked magnetic fields in the human auditory cortex. Brain Res 2008; 1220:102-17. [PMID: 18420183 DOI: 10.1016/j.brainres.2008.02.086] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2008] [Revised: 02/25/2008] [Accepted: 02/27/2008] [Indexed: 10/22/2022]
Abstract
We examined how the task of categorizing linear frequency-modulated (FM) sweeps as rising or falling affects auditory evoked magnetic fields (AEFs) from the human auditory cortex, recorded by means of whole-head magnetoencephalography. AEFs in this task condition were compared with those in a passive condition in which subjects were asked simply to listen to the same stimulus material. We found that the M100-peak latency was significantly shorter for the task condition than for the passive condition in the left but not in the right hemisphere. Furthermore, the M100-peak latency was significantly shorter in the right than in the left hemisphere for both the passive and the task conditions. In contrast, the M100-peak amplitude did not differ significantly between conditions or between hemispheres. We also analyzed the activation strength derived from the integral of the absolute magnetic field over constant time windows between stimulus onset and 260 ms. We isolated an early, narrow time range, between about 60 and 80 ms, that showed larger values in the task condition, most prominently in the right hemisphere. These results add to other imaging and lesion studies suggesting a specific role of the right auditory cortex in identifying FM sweep direction and thus in categorizing FM sweeps as rising or falling.
Affiliation(s)
- Reinhard König
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany
|
47
|
The incoherence hypothesis of schizophrenia: based on decomposed oligodendrocyte-axonic relations. Med Hypotheses 2007; 69:1299-304. [PMID: 17502129 DOI: 10.1016/j.mehy.2007.03.024] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2007] [Accepted: 03/25/2007] [Indexed: 10/23/2022]
Abstract
Based on findings of white matter abnormalities in brains with schizophrenia, it is hypothesized that this white matter pathology may be responsible for symptoms of incoherence in schizophrenia. It is supposed that the processes of oligodendrocytes tie the various properties of axonic information conductance together into categories. A formalism is proposed for this oligodendrocytic computation capacity. In the case of a decrease or loss of oligodendroglia, a brain with schizophrenia is unable to categorize information processing, so that symptoms of incoherence (thought disorder, etc.) occur at the behavioral level. Similarities and differences in the pathophysiology of multiple sclerosis and schizophrenia are also briefly discussed. In sum, a decomposition of the oligodendrocyte-axonic system may be responsible for symptoms of incoherence in schizophrenia.
|
48
|
Selezneva E, Scheich H, Brosch M. Dual time scales for categorical decision making in auditory cortex. Curr Biol 2007; 16:2428-33. [PMID: 17174917 DOI: 10.1016/j.cub.2006.10.027] [Citation(s) in RCA: 70] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2006] [Revised: 10/06/2006] [Accepted: 10/09/2006] [Indexed: 11/29/2022]
Abstract
Category formation allows us to group perceptual objects into meaningful classes and is fundamental to cognition. Categories can be derived from similarity relationships of object features by using prototypes or multiple exemplars, or from abstract relationships of features and rules. A variety of brain areas have been implicated in categorization processes, but mechanistic insights on the single-cell and local-network level are still rare and limited to the matching of individual objects to categories. For directional categorization of tone steps, as in melody recognition, abstract relationships between sequential events (higher or lower in frequency) have to be formed. To explore the neuronal mechanisms of this categorical identification of step direction, we trained monkeys for more than two years on a contour-discrimination task with multiple tone sequences. In the auditory cortex of these highly trained monkeys, we identified two interrelated types of neuronal firing: Increased phasic responses to tones categorically represented the reward-predicting downward frequency steps and not upward steps; subsequently, slow modulations of tonic firing predicted the behavioral decisions of the monkeys, including errors. Our results on neuronal mechanisms of categorical stimulus identification and of decision making attribute a cognitive role to auditory cortex, in addition to its role in signal processing.
Affiliation(s)
- Elena Selezneva
- Leibniz-Institut für Neurobiologie, Brenneckestrasse 6, 39118 Magdeburg, Germany
|
49
|
Husain FT, Horwitz B. Experimental-neuromodeling framework for understanding auditory object processing: integrating data across multiple scales. JOURNAL OF PHYSIOLOGY, PARIS 2006; 100:133-41. [PMID: 17079121 PMCID: PMC1941673 DOI: 10.1016/j.jphysparis.2006.09.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
In this article, we review a combined experimental-neuromodeling framework for understanding brain function with a specific application to auditory object processing. Within this framework, a model is constructed using the best available experimental data and is used to make predictions. The predictions are verified by conducting specific or directed experiments and the resulting data are matched with the simulated data. The model is refined or tested on new data and generates new predictions. The predictions in turn lead to better-focused experiments. The auditory object processing model was constructed using available neurophysiological and neuroanatomical data from mammalian studies of auditory object processing in the cortex. Auditory objects are brief sounds such as syllables, words, melodic fragments, etc. The model can simultaneously simulate neuronal activity at a columnar level and neuroimaging activity at a systems level while processing frequency-modulated tones in a delayed-match-to-sample task. The simulated neuroimaging activity was quantitatively matched with neuroimaging data obtained from experiments; both the simulations and the experiments used similar tasks, sounds, and other experimental parameters. We then used the model to investigate the neural bases of the auditory continuity illusion, a type of perceptual grouping phenomenon, without changing any of its parameters. Perceptual grouping enables the auditory system to integrate brief, disparate sounds into cohesive perceptual units. The neural mechanisms underlying auditory continuity illusion have not been studied extensively with conventional neuroimaging or electrophysiological techniques. Our modeling results agree with behavioral studies in humans and an electrophysiological study in cats. The results predict a particular set of bottom-up cortical processing mechanisms that implement perceptual grouping, and also attest to the robustness of our model.
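The quantitative "match" step of the framework reviewed above can be sketched as a simple comparison of simulated and empirical regional time courses; the regions, data, fit measure (Pearson correlation), and acceptance threshold here are all hypothetical, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
regions = ["A1", "STG", "IFG"]

# Toy "empirical" activity and a toy model output that tracks it with noise
empirical = {r: rng.standard_normal(100) for r in regions}
simulated = {r: empirical[r] + 0.3 * rng.standard_normal(100)
             for r in regions}

def fit(sim, emp):
    """Pearson correlation between simulated and empirical time courses."""
    return float(np.corrcoef(sim, emp)[0, 1])

fits = {r: fit(simulated[r], empirical[r]) for r in regions}
model_ok = all(v > 0.8 for v in fits.values())   # arbitrary acceptance rule
print({r: round(v, 2) for r, v in fits.items()}, "accept:", model_ok)
```

If the fit fails, the framework's loop would refine the model and generate new predictions; if it passes, the model is carried forward to new phenomena (as done above for the continuity illusion).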
Affiliation(s)
- Fatima T. Husain
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bldg 10, Rm 8S235-D, 9000 Rockville Pike, Bethesda, MD 20892 USA
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bldg 10, Rm 8S235-D, 9000 Rockville Pike, Bethesda, MD 20892 USA
|