51. Flexible Coordinator and Switcher Hubs for Adaptive Task Control. J Neurosci 2020; 40:6949-6968. PMID: 32732324; PMCID: PMC7470914; DOI: 10.1523/jneurosci.2559-19.2020.
Abstract
Functional connectivity (FC) studies have identified at least two large-scale neural systems that constitute cognitive control networks, the frontoparietal network (FPN) and cingulo-opercular network (CON). Control networks are thought to support goal-directed cognition and behavior. It was previously shown that the FPN flexibly shifts its global connectivity pattern according to task goal, consistent with a "flexible hub" mechanism for cognitive control. Our aim was to build on this finding to develop a functional cartography (a multimetric profile) of control networks in terms of dynamic network properties. We quantified network properties in (male and female) humans using a high-control-demand cognitive paradigm involving switching among 64 task sets. We hypothesized that cognitive control is enacted by the FPN and CON via distinct but complementary roles reflected in network dynamics. Consistent with a flexible "coordinator" mechanism, FPN connections were varied across tasks, while maintaining within-network connectivity to aid cross-region coordination. Consistent with a flexible "switcher" mechanism, CON regions switched to other networks in a task-dependent manner, driven primarily by reduced within-network connections to other CON regions. This pattern of results suggests FPN acts as a dynamic, global coordinator of goal-relevant information, while CON transiently disbands to lend processing resources to other goal-relevant networks. This cartography of network dynamics reveals a dissociation between two prominent cognitive control networks, suggesting complementary mechanisms underlying goal-directed cognition.
SIGNIFICANCE STATEMENT Cognitive control supports a variety of behaviors requiring flexible cognition, such as rapidly switching between tasks. Furthermore, cognitive control is negatively impacted in a variety of mental illnesses. We used tools from network science to characterize the implementation of cognitive control by large-scale brain systems. This revealed that two systems, the frontoparietal (FPN) and cingulo-opercular (CON) networks, have distinct but complementary roles in controlling global network reconfigurations. The FPN exhibited properties of a flexible coordinator (orchestrating task changes), while CON acted as a flexible switcher (switching specific regions to other systems to lend processing resources). These findings reveal an underlying distinction in cognitive processes that may be applicable to clinical, educational, and machine learning work targeting cognitive flexibility.
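As an illustration of the kind of dynamic network measures this abstract describes, the sketch below computes two simple quantities from a stack of task-wise functional connectivity matrices: how much each region's connectivity pattern varies across tasks, and a network's within- versus between-network connectivity. It is a minimal sketch on simulated data; the array shapes, the network labels, and the specific measures are assumptions made for illustration, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_tasks = 100, 64
fc = rng.normal(size=(n_tasks, n_regions, n_regions))   # one simulated FC matrix per task
fc = (fc + fc.transpose(0, 2, 1)) / 2                    # make each matrix symmetric
labels = rng.integers(0, 10, size=n_regions)             # hypothetical network assignment per region
fpn = labels == 0                                        # treat community 0 as a stand-in for "FPN"

# Connectivity variability: std of each region's connections across tasks,
# averaged over its connection targets (higher = more task-dependent coupling).
variability = fc.std(axis=0).mean(axis=1)

# Within- vs. between-network connectivity for the "FPN" regions, averaged over tasks.
mean_fc = fc.mean(axis=0)
n_fpn = fpn.sum()
within = mean_fc[np.ix_(fpn, fpn)][~np.eye(n_fpn, dtype=bool)].mean()
between = mean_fc[np.ix_(fpn, ~fpn)].mean()
print(f"variability={variability[fpn].mean():.3f} within={within:.3f} between={between:.3f}")
```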
52. Zekveld AA, van Scheepen JAM, Versfeld NJ, Kramer SE, van Steenbergen H. The Influence of Hearing Loss on Cognitive Control in an Auditory Conflict Task: Behavioral and Pupillometry Findings. J Speech Lang Hear Res 2020; 63:2483-2492. PMID: 32610026; DOI: 10.1044/2020_jslhr-20-00107.
Abstract
Purpose The pupil dilation response is sensitive not only to auditory task demand but also to cognitive conflict. Conflict is induced by incompatible trials in auditory Stroop tasks in which participants have to identify the presentation location (left or right ear) of the words "left" or "right." Previous studies demonstrated that the compatibility effect is reduced if the trial is preceded by another incompatible trial (conflict adaptation). Here, we investigated the influence of hearing status on cognitive conflict and conflict adaptation in an auditory Stroop task. Method Two age-matched groups consisting of 32 normal-hearing participants (M age = 52 years, age range: 25-67 years) and 28 participants with hearing impairment (M age = 52 years, age range: 23-64 years) performed an auditory Stroop task. We assessed the effects of hearing status and stimulus compatibility on reaction times (RTs) and pupil dilation responses. We furthermore analyzed the Pearson correlation coefficients between age, degree of hearing loss, and the compatibility effects on the RT and pupil response data across all participants. Results As expected, the RTs were longer and pupil dilation was larger for incompatible relative to compatible trials. Furthermore, these effects were reduced for trials following incompatible (as compared to compatible) trials (conflict adaptation). No general effect of hearing status was observed, but the correlations suggested that higher age and a larger degree of hearing loss were associated with more interference of current incompatibility on RTs. Conclusions Conflict processing and adaptation effects were observed on the RTs and pupil dilation responses in an auditory Stroop task. No general effects of hearing status were observed, but the correlations suggested that higher age and a greater degree of hearing loss were related to reduced conflict processing ability. The current study underlines the relevance of taking into account cognitive control and conflict adaptation processes.
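For readers unfamiliar with the conflict-adaptation measure referenced here, the following sketch shows how a compatibility effect and its modulation by the previous trial's compatibility can be computed from trial-level reaction times. The data frame, column names, and simulated RTs are illustrative assumptions, not the study's analysis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
trials = pd.DataFrame({
    "compatible": rng.integers(0, 2, n).astype(bool),   # current-trial compatibility
    "rt": rng.normal(650.0, 80.0, n),                    # reaction time in ms
})
trials.loc[~trials["compatible"], "rt"] += 40            # simulate Stroop interference
trials["prev_compatible"] = trials["compatible"].shift(1)
trials = trials.dropna()

# Compatibility effect: incompatible minus compatible mean RT.
by_comp = trials.groupby("compatible")["rt"].mean()
print("compatibility effect (ms):", by_comp.loc[False] - by_comp.loc[True])

# Conflict adaptation: the compatibility effect after compatible trials
# minus the compatibility effect after incompatible trials.
cells = trials.groupby(["prev_compatible", "compatible"])["rt"].mean().unstack()
adaptation = ((cells.loc[True, False] - cells.loc[True, True])
              - (cells.loc[False, False] - cells.loc[False, True]))
print("conflict adaptation (ms):", adaptation)
```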
Affiliation(s)
- Adriana A Zekveld: Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- J A M van Scheepen: Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Niek J Versfeld: Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Sophia E Kramer: Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Henk van Steenbergen: Cognitive Psychology Unit, Institute of Psychology, University of Leiden, the Netherlands; Leiden Institute for Brain and Cognition, the Netherlands
53. Yuriko Santos Kawata N, Hashimoto T, Kawashima R. Neural mechanisms underlying concurrent listening of simultaneous speech. Brain Res 2020; 1738:146821. PMID: 32259518; DOI: 10.1016/j.brainres.2020.146821.
Abstract
Can we identify what two people are saying at the same time? Although it is difficult to perfectly repeat two or more simultaneous messages, listeners can report information from both speakers. In a concurrent/divided listening task, enhanced attention and segregation of speech may be required rather than selection and suppression. However, the neural mechanisms of concurrent listening to multi-speaker speech have yet to be clarified. The present study utilized functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials testing concurrent listening, 31 participants achieved performance comparable with that of selective listening. Furthermore, compared to selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray region. In addition to the salience network for multi-speaker listening, attentional modulation and enhanced segregation of these signals could be used to achieve successful concurrent listening. These results indicate the presence of a potential mechanism by which one can listen to two voices with enhanced attention to saliency signals.
Affiliation(s)
- Natasha Yuriko Santos Kawata: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Teruo Hashimoto: Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Ryuta Kawashima: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan; Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
54. Diachek E, Blank I, Siegelman M, Affourtit J, Fedorenko E. The Domain-General Multiple Demand (MD) Network Does Not Support Core Aspects of Language Comprehension: A Large-Scale fMRI Investigation. J Neurosci 2020; 40:4536-4550. PMID: 32317387; PMCID: PMC7275862; DOI: 10.1523/jneurosci.2036-19.2020.
Abstract
Aside from the language-selective left-lateralized frontotemporal network, language comprehension sometimes recruits a domain-general bilateral frontoparietal network implicated in executive functions: the multiple demand (MD) network. However, the nature of the MD network's contributions to language comprehension remains debated. To illuminate the role of this network in language processing in humans, we conducted a large-scale fMRI investigation using data from 30 diverse word and sentence comprehension experiments (481 unique participants [female and male], 678 scanning sessions). In line with prior findings, the MD network was active during many language tasks. Moreover, similar to the language-selective network, which is robustly lateralized to the left hemisphere, these responses were stronger in the left-hemisphere MD regions. However, in contrast with the language-selective network, the MD network responded more strongly (1) to lists of unconnected words than to sentences, and (2) in paradigms with an explicit task compared with passive comprehension paradigms. Indeed, many passive comprehension tasks failed to elicit a response above the fixation baseline in the MD network, in contrast to strong responses in the language-selective network. Together, these results argue against a role for the MD network in core aspects of sentence comprehension, such as inhibiting irrelevant meanings or parses, keeping intermediate representations active in working memory, or predicting upcoming words or structures. These results align with recent evidence of relatively poor tracking of the linguistic signal by the MD regions during naturalistic comprehension, and instead suggest that the MD network's engagement during language processing reflects effort associated with extraneous task demands.
SIGNIFICANCE STATEMENT Domain-general executive processes, such as working memory and cognitive control, have long been implicated in language comprehension, including in neuroimaging studies that have reported activation in domain-general multiple demand (MD) regions for linguistic manipulations. However, much prior evidence has come from paradigms where language interpretation is accompanied by extraneous tasks. Using a large fMRI dataset (30 experiments/481 participants/678 sessions), we demonstrate that MD regions are engaged during language comprehension in the presence of task demands, but not during passive reading/listening, conditions that strongly activate the frontotemporal language network. These results present a fundamental challenge to proposals whereby linguistic computations, such as inhibiting irrelevant meanings, keeping representations active in working memory, or predicting upcoming elements, draw on domain-general executive resources.
Affiliation(s)
- Evgeniia Diachek: Department of Psychology, Vanderbilt University, Nashville, Tennessee 37203
- Idan Blank: Department of Psychology, University of California at Los Angeles, Los Angeles, California 90095
- Matthew Siegelman: Department of Psychology, Columbia University, New York, New York 10027
- Josef Affourtit: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts 02129
55. Shain C, Blank IA, van Schijndel M, Schuler W, Fedorenko E. fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia 2020; 138:107307. PMID: 31874149; PMCID: PMC7140726; DOI: 10.1016/j.neuropsychologia.2019.107307.
Abstract
Much research in cognitive neuroscience supports prediction as a canonical computation of cognition across domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the brain's response to language is domain-specific, and (2) that these predictions are sensitive both to local word co-occurrence patterns and to hierarchical structure. Using a recently developed continuous-time deconvolutional regression technique that supports data-driven hemodynamic response function discovery from continuous BOLD signal fluctuations in response to naturalistic stimuli, we found effects of prediction measures in the language network but not in the domain-general multiple-demand network, which supports executive control processes and has been previously implicated in language comprehension. Moreover, within the language network, surface-level and structural prediction effects were separable. The predictability effects in the language network were substantial, with the model capturing over 37% of explainable variance on held-out data. These findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.
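A simple way to see what a "surface-level" word-predictability predictor looks like is sketched below: per-word surprisal from an add-one-smoothed bigram model. This is a minimal sketch on a toy corpus; the study itself derived predictors from much larger language models and syntactic parsers, so the corpus, sentence, and model here are purely illustrative.

```python
import math
from collections import Counter

corpus = "the dog chased the cat the cat ran away".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus[:-1], corpus[1:]))
vocab = len(unigrams)

def surprisal(prev, word):
    """-log2 P(word | prev) with add-one smoothing."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

sentence = "the cat chased the dog".split()
for prev, word in zip(sentence[:-1], sentence[1:]):
    print(f"{word}: {surprisal(prev, word):.2f} bits")
```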
Affiliation(s)
- Idan Asher Blank: University of California Los Angeles, 90024, USA; Massachusetts Institute of Technology, 02139, USA
- William Schuler: The Ohio State University, 43210, USA; Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA
- Evelina Fedorenko: Massachusetts General Hospital, Program in Speech and Hearing Bioscience and Technology, 02115, USA
56. Rosemann S, Thiel CM. Neural Signatures of Working Memory in Age-related Hearing Loss. Neuroscience 2020; 429:134-142. PMID: 31935488; DOI: 10.1016/j.neuroscience.2019.12.046.
Abstract
Age-related hearing loss affects the ability to hear high frequencies and therefore leads to difficulties in understanding speech, particularly under adverse listening conditions. This decrease in hearing can be partly compensated by the recruitment of executive functions, such as working memory. The compensatory effort may, however, lead to a decrease in available neural resources compromising cognitive abilities. We here aim to investigate whether mild to moderate hearing loss impacts prefrontal functions and related executive processes and whether these are related to speech-in-noise perception abilities. Nineteen hard of hearing and nineteen age-matched normal-hearing participants performed a working memory task to drive prefrontal activity, which was gauged with functional magnetic resonance imaging. In addition, speech-in-noise understanding, cognitive flexibility and inhibition control were assessed. Our results showed no differences in frontoparietal activation patterns and working memory performance between normal-hearing and hard of hearing participants. The behavioral assessment of further executive functions, however, provided evidence of lower cognitive flexibility in hard of hearing participants. Cognitive flexibility and hearing abilities further predicted speech-in-noise perception. We conclude that neural and behavioral signatures of working memory are intact in mild to moderate hearing loss. Moreover, cognitive flexibility seems to be closely related to hearing impairment and speech-in-noise perception and should, therefore, be investigated in future studies assessing age-related hearing loss and its implications on prefrontal functions.
Affiliation(s)
- Stephanie Rosemann: Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Christiane M Thiel: Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
57. Rogers CS, Jones MS, McConkey S, Spehar B, Van Engen KJ, Sommers MS, Peelle JE. Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition. Neurobiol Lang (Camb) 2020; 1:452-473. PMID: 34327333; PMCID: PMC8318202; DOI: 10.1162/nol_a_00021.
Abstract
Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.
Affiliation(s)
- Chad S. Rogers: Department of Psychology, Union College, Schenectady, NY, USA
- Michael S. Jones: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Sarah McConkey: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Brent Spehar: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Kristin J. Van Engen: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Mitchell S. Sommers: Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Jonathan E. Peelle: Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
58. Chaitanya G, Hinds W, Kragel J, He X, Sideman N, Ezzyat Y, Sperling MR, Sharan A, Tracy JI. Tonic Resting State Hubness Supports High Gamma Activity Defined Verbal Memory Encoding Network in Epilepsy. Neuroscience 2019; 425:194-216. PMID: 31786346; DOI: 10.1016/j.neuroscience.2019.11.001.
Abstract
High gamma activity (HGA) during verbal-memory encoding, measured with invasive electroencephalography (iEEG), has laid the foundation for numerous studies testing the integrity of memory in diseased populations. Yet, the functional connectivity characteristics of the networks subserving these memory linkages remain uncertain. By integrating this electrophysiological biomarker of memory encoding from iEEG with resting-state BOLD fluctuations, we estimated the segregation and hubness of HGA-memory regions in drug-resistant epilepsy patients and matched healthy controls. HGA-memory regions express distinctly different hubness compared to neighboring regions in health and in epilepsy, and this hubness was more relevant than segregation in predicting verbal memory encoding. The HGA-memory network comprised regions from both the cognitive control and primary processing networks, validating that effective verbal-memory encoding requires integrating brain functions, and is not dominated by a central cognitive core. Our results demonstrate a tonic intrinsic set of functional connectivity, which provides the necessary conditions for effective, phasic, task-dependent memory encoding.
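The "hubness" idea invoked here can be illustrated with a standard graph measure. The sketch below computes the participation coefficient of each region from a thresholded resting-state connectivity matrix; the matrix, threshold, and module labels are simulated assumptions, and the paper's exact hubness metric may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
fc = np.abs(rng.normal(size=(n, n)))
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0)
adj = (fc > np.percentile(fc, 90)).astype(float)     # keep the strongest 10% of edges
modules = rng.integers(0, 4, size=n)                  # hypothetical network assignment

degree = adj.sum(axis=1)
participation = np.zeros(n)
for m in np.unique(modules):
    k_m = adj[:, modules == m].sum(axis=1)            # each region's edges into module m
    participation += (k_m / np.maximum(degree, 1)) ** 2
participation = 1 - participation                     # P_i = 1 - sum_m (k_im / k_i)^2
participation[degree == 0] = 0                        # isolated nodes are not hubs
print("top hub regions:", np.argsort(participation)[-5:])
```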
Affiliation(s)
- Ganne Chaitanya: Department of Neurology, Thomas Jefferson University, Philadelphia, PA 19107, United States
- Walter Hinds: School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, United States
- James Kragel: Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Xiaosong He: Department of Neurology, Thomas Jefferson University, Philadelphia, PA 19107, United States
- Noah Sideman: Department of Neurology, Thomas Jefferson University, Philadelphia, PA 19107, United States
- Youssef Ezzyat: Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Michael R Sperling: Department of Neurology, Thomas Jefferson University, Philadelphia, PA 19107, United States
- Ashwini Sharan: Department of Neurosurgery, Thomas Jefferson University, Philadelphia, PA 19107, United States
- Joseph I Tracy: Department of Neurology, Thomas Jefferson University, Philadelphia, PA 19107, United States
59. Klaus J, Schutter DJLG, Piai V. Transient perturbation of the left temporal cortex evokes plasticity-related reconfiguration of the lexical network. Hum Brain Mapp 2019; 41:1061-1071. PMID: 31705740; PMCID: PMC7267941; DOI: 10.1002/hbm.24860.
Abstract
While much progress has been made in how brain organization supports language function, the language network's ability to adapt to immediate disturbances by means of reorganization remains unclear. The aim of this study was to examine acute reorganizational changes in brain activity related to conceptual and lexical retrieval in unimpaired language production following transient disruption of the left middle temporal gyrus (MTG). In a randomized single‐blind within‐subject experiment, we recorded the electroencephalogram from 16 healthy participants during a context‐driven picture‐naming task. Prior to the task, the left MTG was perturbed with real continuous theta‐burst stimulation (cTBS) or sham stimulation. During the task, participants read lead‐in sentences creating a constraining (e.g., “The farmer milks the”) or nonconstraining context (e.g., “The farmer buys the”). The last word was shown as a picture that participants had to name (e.g., “cow”). Replicating behavioral studies, participants were overall faster in naming pictures following a constraining relative to a nonconstraining context, but this effect did not differ between real and sham cTBS. In contrast, real cTBS increased overall error rates compared to sham cTBS. In line with previous studies, we observed a decrease in alpha‐beta (8–24 Hz) oscillatory power for constraining relative to nonconstraining contexts over left temporal–parietal cortex after participants received sham cTBS. However, following real cTBS, this decrease extended toward left prefrontal regions associated with both domain‐general and domain‐specific control mechanisms. Our findings provide evidence that immediately after perturbing the left MTG, the lexical‐semantic network is able to quickly reconfigure, also recruiting domain‐general regions.
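To make the alpha-beta (8-24 Hz) power measure concrete, the sketch below estimates band power per condition with Welch's method and takes the constraining-minus-nonconstraining difference. The sampling rate, trial structure, and simulated single-channel signals are assumptions for illustration, not the study's EEG pipeline.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 500                                              # sampling rate (Hz), assumed
constraining = rng.normal(size=(40, 2 * fs))          # 40 trials x 2 s of simulated EEG
nonconstraining = rng.normal(size=(40, 2 * fs)) * 1.1 # slightly more broadband power

def band_power(trials, lo=8, hi=24):
    """Mean power in the lo-hi Hz band, averaged over trials."""
    freqs, psd = welch(trials, fs=fs, nperseg=fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean()

diff = band_power(constraining) - band_power(nonconstraining)
print(f"alpha-beta power difference (constraining - nonconstraining): {diff:.4f}")
```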
Affiliation(s)
- Jana Klaus: Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, Netherlands
- Dennis J L G Schutter: Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands; Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, Netherlands
- Vitória Piai: Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands; Donders Centre for Medical Neuroscience, Radboud University Medical Center, Nijmegen, Netherlands
60. Lemée JM, Berro DH, Bernard F, Chinier E, Leiber LM, Menei P, Ter Minassian A. Resting-state functional magnetic resonance imaging versus task-based activity for language mapping and correlation with perioperative cortical mapping. Brain Behav 2019; 9:e01362. PMID: 31568681; PMCID: PMC6790308; DOI: 10.1002/brb3.1362.
Abstract
INTRODUCTION: Preoperative language mapping using functional magnetic resonance imaging (fMRI) aims to identify eloquent areas in the vicinity of surgically resectable brain lesions. fMRI methodology relies on the blood-oxygen-level-dependent (BOLD) analysis to identify brain language areas. Task-based fMRI studies the BOLD signal increase in brain areas during a language task to identify brain language areas, which requires patients' cooperation, whereas resting-state fMRI (rsfMRI) allows identification of functional networks without performing any explicit task through the analysis of the synchronicity of spontaneous BOLD signal oscillation between brain areas. The aim of this study was to compare preoperative language mapping using rsfMRI and task fMRI to cortical mapping (CM) during awake craniotomies.
METHODS: Fifty adult patients surgically treated for a brain lesion were enrolled. All patients had a presurgical language mapping with both task fMRI and rsfMRI. Identified language networks were compared to perioperative language mapping using electric cortical stimulation.
RESULTS: Resting-state fMRI was able to detect brain language areas during CM with a sensitivity of 100% compared to 65.6% with task fMRI. However, we were not able to perform a specificity analysis and compare task-based and rest fMRI with our perioperative setting in the current study. In second-order analysis, task fMRI included main nodes of the SN, and main areas involved in semantics were identified in rsfMRI.
CONCLUSION: Resting-state fMRI for presurgical language mapping is easy to implement, allowing the identification of the functional brain language network with a greater sensitivity than task-based fMRI, at the cost of some precautions and a lower specificity. Further study is required to compare both the sensitivity and the specificity of the two methods and to evaluate the clinical value of rsfMRI as an alternative tool for the presurgical identification of brain language areas.
Affiliation(s)
- Jean-Michel Lemée: Department of Neurosurgery, University Hospital of Angers, Angers, France; INSERM CRCINA Équipe 17, Bâtiment IRIS, Angers, France
- Florian Bernard: Department of Neurosurgery, University Hospital of Angers, Angers, France; Angers Medical Faculty, Anatomy Laboratory, Angers, France
- Eva Chinier: Department of Physical Medicine and Rehabilitation, University Hospital of Angers, Nantes, France
- Philippe Menei: Department of Neurosurgery, University Hospital of Angers, Angers, France; INSERM CRCINA Équipe 17, Bâtiment IRIS, Angers, France
- Aram Ter Minassian: Department of Anesthesiology, University Hospital of Angers, Angers, France; LARIS EA 7315, Image Signal et Sciences du Vivant, Angers Teaching Hospital, Angers, France
61. Gao Y, Zhang J, Wang Q. Robust neural tracking of linguistic units relates to distractor suppression. Eur J Neurosci 2019; 51:641-650. PMID: 31430411; DOI: 10.1111/ejn.14552.
Abstract
In a complex auditory scene, speech comprehension involves several stages: for example segregating the target from the background, recognizing syllables and integrating syllables into linguistic units (e.g., words). Although speech segregation is robust as shown by invariant neural tracking to target speech envelope, whether neural tracking to linguistic units is also robust and how this robustness is achieved remain unknown. To investigate these questions, we concurrently recorded neural responses tracking a rhythmic speech stream at its syllabic and word rates, using electroencephalography. Human participants listened to that target speech under a speech or noise distractor at varying signal-to-noise ratios. Neural tracking at the word rate was not as robust as neural tracking at the syllabic rate. Robust neural tracking to target's words was only observed under the speech distractor but not under the noise distractor. Moreover, this robust word tracking correlated with a successful suppression of distractor tracking. Critically, both word tracking and distractor suppression correlated with behavioural comprehension accuracy. In sum, our results suggest that a robust neural tracking of higher-level linguistic units relates to not only the target tracking, but also the distractor suppression.
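The frequency-tagging logic behind "neural tracking at the syllabic and word rates" can be sketched as follows: take the spectrum of the EEG and compare power at each tagged rate with neighboring frequency bins. The rates, recording length, and simulated signal below are illustrative assumptions rather than the study's actual analysis.

```python
import numpy as np

fs, duration = 250, 40                                # Hz, seconds (assumed)
syllable_rate, word_rate = 4.0, 2.0                   # e.g., two-syllable words at a 4 Hz syllable rate
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(4)
eeg = (0.8 * np.sin(2 * np.pi * syllable_rate * t)
       + 0.3 * np.sin(2 * np.pi * word_rate * t)
       + rng.normal(size=t.size))                     # simulated tracking response plus noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def peak(rate):
    """Amplitude at the tagged rate relative to neighboring bins."""
    idx = np.argmin(np.abs(freqs - rate))
    neighbors = np.r_[spectrum[idx - 5:idx - 1], spectrum[idx + 2:idx + 6]]
    return spectrum[idx] / neighbors.mean()

print(f"syllable-rate tracking: {peak(syllable_rate):.2f}, word-rate tracking: {peak(word_rate):.2f}")
```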
Affiliation(s)
- Yayue Gao: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Department of Psychology, Beihang University, Beijing, China
- Jianfeng Zhang: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Qian Wang: Department of Clinical Psychology, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, Beijing, China
62. Vaden KI, Eckert MA, Dubno JR, Harris KC. Cingulo-opercular adaptive control for younger and older adults during a challenging gap detection task. J Neurosci Res 2019; 98:680-691. PMID: 31385349; DOI: 10.1002/jnr.24506.
Abstract
Cingulo-opercular activity is hypothesized to reflect an adaptive control function that optimizes task performance through adjustments in attention and behavior, and outcome monitoring. While auditory perceptual task performance appears to benefit from elevated activity in cingulo-opercular regions of frontal cortex before stimuli are presented, this association appears reduced for older adults compared to younger adults. However, adaptive control function may be limited by difficult task conditions for older adults. An fMRI study was used to characterize adaptive control differences while 15 younger (average age = 24 years) and 15 older adults (average age = 68 years) performed a gap detection in noise task designed to limit age-related differences. During the fMRI study, participants listened to a noise recording and indicated with a button-press whether it contained a gap. Stimuli were presented between sparse fMRI scans (TR = 8.6 s) and BOLD measurements were collected during separate listening and behavioral response intervals. Age-related performance differences were limited by presenting gaps in noise with durations calibrated at or above each participant's detection threshold. Cingulo-opercular BOLD increased significantly throughout listening and behavioral response intervals, relative to a resting baseline. Correct behavioral responses were significantly more likely on trials with elevated pre-stimulus cingulo-opercular BOLD, consistent with an adaptive control framework. Cingulo-opercular adaptive control estimates appeared higher for participants with better gap sensitivity and lower response bias, irrespective of age, which suggests that this mechanism can benefit performance across the lifespan under conditions that limit age-related performance differences.
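The trial-level association reported here (correct responses more likely when pre-stimulus cingulo-opercular BOLD is elevated) is naturally expressed as a logistic regression. The sketch below shows that analysis shape on simulated data; the variable names and effect sizes are assumptions, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_trials = 200
prestim_bold = rng.normal(size=n_trials)                    # per-trial pre-stimulus BOLD estimate
p_correct = 1 / (1 + np.exp(-(0.3 + 0.8 * prestim_bold)))   # simulate a positive relationship
correct = rng.random(n_trials) < p_correct                  # simulated trial accuracy

model = LogisticRegression().fit(prestim_bold.reshape(-1, 1), correct)
print("log-odds change in accuracy per unit pre-stimulus BOLD:", model.coef_[0][0])
```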
Affiliation(s)
- Kenneth I Vaden: Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Mark A Eckert: Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Judy R Dubno: Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
- Kelly C Harris: Hearing Research Program, Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
63. Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. PMID: 28938250; PMCID: PMC5821557; DOI: 10.1097/aud.0000000000000494.
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Affiliation(s)
- Jonathan E Peelle: Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
64. Rosemann S, Thiel CM. The effect of age-related hearing loss and listening effort on resting state connectivity. Sci Rep 2019; 9:2337. PMID: 30787339; PMCID: PMC6382886; DOI: 10.1038/s41598-019-38816-z.
Abstract
Age-related hearing loss is associated with a decrease in hearing abilities for high frequencies. This increases not only the difficulty to understand speech but also the experienced listening effort. Task-based neuroimaging studies in normal-hearing and hearing-impaired participants show an increased frontal activation during effortful speech perception in the hearing-impaired. Whether the increased effort in everyday listening in the hearing-impaired even impacts functional brain connectivity at rest is unknown. Nineteen normal-hearing and nineteen hearing-impaired participants with mild to moderate hearing loss participated in the study. Hearing abilities, listening effort and resting state functional connectivity were assessed. Our results indicate no differences in functional connectivity between hearing-impaired and normal-hearing participants. Increased listening effort, however, was related to significantly decreased functional connectivity between the dorsal attention network and the precuneus and superior parietal lobule as well as between the auditory and the inferior frontal cortex. We conclude that already mild to moderate age-related hearing loss can impact resting state functional connectivity. It is, however, not the hearing loss itself but the individually perceived listening effort that relates to functional connectivity changes.
Affiliation(s)
- Stephanie Rosemann: Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Christiane M Thiel: Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
65. Modular reconfiguration of an auditory control brain network supports adaptive listening behavior. Proc Natl Acad Sci U S A 2018; 116:660-669. PMID: 30587584; PMCID: PMC6329957; DOI: 10.1073/pnas.1815321116.
Abstract
How do brain networks shape our listening behavior? We here develop and test the hypothesis that, during challenging listening situations, intrinsic brain networks are reconfigured to adapt to the listening demands and thus, to enable successful listening. We find that, relative to a task-free resting state, networks of the listening brain show higher segregation of temporal auditory, ventral attention, and frontal control regions known to be involved in speech processing, sound localization, and effortful listening. Importantly, the relative change in modularity of this auditory control network predicts individuals’ listening success. Our findings shed light on how cortical communication dynamics tune selection and comprehension of speech in challenging listening situations and suggest modularity as the network principle of auditory attention. Speech comprehension in noisy, multitalker situations poses a challenge. Successful behavioral adaptation to a listening challenge often requires stronger engagement of auditory spatial attention and context-dependent semantic predictions. Human listeners differ substantially in the degree to which they adapt behaviorally and can listen successfully under such circumstances. How cortical networks embody this adaptation, particularly at the individual level, is currently unknown. We here explain this adaptation from reconfiguration of brain networks for a challenging listening task (i.e., a linguistic variant of the Posner paradigm with concurrent speech) in an age-varying sample of n = 49 healthy adults undergoing resting-state and task fMRI. We here provide evidence for the hypothesis that more successful listeners exhibit stronger task-specific reconfiguration (hence, better adaptation) of brain networks. From rest to task, brain networks become reconfigured toward more localized cortical processing characterized by higher topological segregation. This reconfiguration is dominated by the functional division of an auditory and a cingulo-opercular module and the emergence of a conjoined auditory and ventral attention module along bilateral middle and posterior temporal cortices. Supporting our hypothesis, the degree to which modularity of this frontotemporal auditory control network is increased relative to resting state predicts individuals’ listening success in states of divided and selective attention. Our findings elucidate how fine-tuned cortical communication dynamics shape selection and comprehension of speech. Our results highlight modularity of the auditory control network as a key organizational principle in cortical implementation of auditory spatial attention in challenging listening situations.
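The central quantity in this abstract, the change in network modularity from rest to task, can be illustrated with standard community-detection tools. The sketch below builds two toy graphs (a "rest" graph and a more segregated "task" graph), partitions each into modules, and compares modularity Q; the graphs and parameters are illustrative assumptions, not the study's connectomes.

```python
import networkx as nx
from networkx.algorithms import community

rest = nx.planted_partition_graph(4, 25, p_in=0.30, p_out=0.10, seed=6)
task = nx.planted_partition_graph(4, 25, p_in=0.45, p_out=0.05, seed=6)  # more segregated

def modularity_q(graph):
    """Partition the graph into modules and return Newman's modularity Q."""
    communities = community.greedy_modularity_communities(graph)
    return community.modularity(graph, communities)

q_rest, q_task = modularity_q(rest), modularity_q(task)
print(f"Q(rest)={q_rest:.3f}  Q(task)={q_task:.3f}  rest-to-task change={q_task - q_rest:.3f}")
```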
66. Billings CJ, Madsen BM. A perspective on brain-behavior relationships and effects of age and hearing using speech-in-noise stimuli. Hear Res 2018; 369:90-102. PMID: 29661615; PMCID: PMC6636926; DOI: 10.1016/j.heares.2018.03.024.
Abstract
Understanding speech in background noise is often more difficult for individuals who are older and have hearing impairment than for younger, normal-hearing individuals. In fact, speech-understanding abilities among older individuals with hearing impairment varies greatly. Researchers have hypothesized that some of that variability can be explained by how the brain encodes speech signals in the presence of noise, and that brain measures may be useful for predicting behavioral performance in difficult-to-test patients. In a series of experiments, we have explored the effects of age and hearing impairment in both brain and behavioral domains with the goal of using brain measures to improve our understanding of speech-in-noise difficulties. The behavioral measures examined showed effect sizes for hearing impairment that were 6-10 dB larger than the effects of age when tested in steady-state noise, whereas electrophysiological age effects were similar in magnitude to those of hearing impairment. Both age and hearing status influence neural responses to speech as well as speech understanding in background noise. These effects can in turn be modulated by other factors, such as the characteristics of the background noise itself. Finally, the use of electrophysiology to predict performance on receptive speech-in-noise tasks holds promise, demonstrating root-mean-square prediction errors as small as 1-2 dB. An important next step in this field of inquiry is to sample the aging and hearing impairment variables continuously (rather than categorically) - across the whole lifespan and audiogram - to improve effect estimates.
Affiliation(s)
- Curtis J Billings: National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA; Department of Otolaryngology, Oregon Health & Science University, 3181 SW Sam Jackson Park Road, Portland, OR 97239, USA
- Brandon M Madsen: National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA
67. Donohew L, DiBartolo M, Zhu X, Benca C, Lorch E, Noar SM, Kelly TH, Joseph JE. Communicating with Sensation Seekers: An fMRI Study of Neural Responses to Antidrug Public Service Announcements. Health Commun 2018; 33:1004-1012. PMID: 28622027; PMCID: PMC6190582; DOI: 10.1080/10410236.2017.1331185.
Abstract
This study examined the neural basis of processing high- and low-message sensation value (MSV) antidrug public service announcements (PSAs) in high (HSS) and low sensation seekers (LSS) using fMRI. HSS more strongly engaged the salience network when processing PSAs (versus LSS), suggesting that high-MSV PSAs attracted their attention. HSS and LSS participants who engaged higher level cognitive processing regions reported that the PSAs were more convincing and believable and recalled the PSAs better immediately after testing. In contrast, HSS and LSS participants who strongly engaged visual attention regions for viewing PSAs reported lower personal relevance. These findings provide neurobiological evidence that high-MSV content is salient to HSS, a primary target group for antidrug messages, and additional cognitive processing is associated with higher perceived message effectiveness.
Affiliation(s)
- Xun Zhu: Department of Neurosciences, Medical University of South Carolina
- Chelsie Benca: Department of Neurosciences, Medical University of South Carolina
- Seth M. Noar: Department of Communication, University of Kentucky
- Jane E. Joseph: Department of Neurosciences, Medical University of South Carolina
68. Leon M, Woo C. Environmental Enrichment and Successful Aging. Front Behav Neurosci 2018; 12:155. PMID: 30083097; PMCID: PMC6065351; DOI: 10.3389/fnbeh.2018.00155.
Abstract
The human brain sustains a slow but progressive decline in function as it ages and these changes are particularly profound in cognitive processing. A potential contributor to this deterioration is the gradual decline in the functioning of multiple sensory systems and the effects they have on areas of the brain that mediate cognitive function. In older adults, diminished capacity is typically observed in the visual, auditory, masticatory, olfactory, and motor systems, and these age-related declines are associated with both a decline in cognitive proficiency, and a loss of neurons in regions of the brain. We will review how the loss of hearing, vision, mastication skills, olfactory impairment, and motoric decline accompany cognitive loss, and how improved functioning of these systems may aid in the restoration of the cognitive abilities in older adults. The human brain appears to require a great deal of stimulation to maintain its cognitive efficacy as people age and environmental enrichment may aid in its maintenance and recovery.
Affiliation(s)
- Michael Leon: Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, United States
- Cynthia Woo: Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, United States
69. Differences in Hearing Acuity among "Normal-Hearing" Young Adults Modulate the Neural Basis for Speech Comprehension. eNeuro 2018; 5:eN-NWR-0263-17. PMID: 29911176; PMCID: PMC6001266; DOI: 10.1523/eneuro.0263-17.2018.
Abstract
In this paper, we investigate how subtle differences in hearing acuity affect the neural systems supporting speech processing in young adults. Auditory sentence comprehension requires perceiving a complex acoustic signal and performing linguistic operations to extract the correct meaning. We used functional MRI to monitor human brain activity while adults aged 18–41 years listened to spoken sentences. The sentences varied in their level of syntactic processing demands, containing either a subject-relative or object-relative center-embedded clause. All participants self-reported normal hearing, confirmed by audiometric testing, with some variation within a clinically normal range. We found that participants showed activity related to sentence processing in a left-lateralized frontotemporal network. Although accuracy was generally high, participants still made some errors, which were associated with increased activity in bilateral cingulo-opercular and frontoparietal attention networks. A whole-brain regression analysis revealed that activity in a right anterior middle frontal gyrus (aMFG) component of the frontoparietal attention network was related to individual differences in hearing acuity, such that listeners with poorer hearing showed greater recruitment of this region when successfully understanding a sentence. The activity in right aMFGs for listeners with poor hearing did not differ as a function of sentence type, suggesting a general mechanism that is independent of linguistic processing demands. Our results suggest that even modest variations in hearing ability impact the systems supporting auditory speech comprehension, and that auditory sentence comprehension entails the coordination of a left perisylvian network that is sensitive to linguistic variation with an executive attention network that responds to acoustic challenge.
70. Dopaminergic modulation of hemodynamic signal variability and the functional connectome during cognitive performance. Neuroimage 2018; 172:341-356. DOI: 10.1016/j.neuroimage.2018.01.048.
71. Rosemann S, Thiel CM. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. Neuroimage 2018; 175:425-437. PMID: 29655940; DOI: 10.1016/j.neuroimage.2018.04.023.
Abstract
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.
Collapse
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
| | - Christiane M Thiel
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
72
|
Richards TL, Berninger VW, Yagle K, Abbott RD, Peterson D. Brain's functional network clustering coefficient changes in response to instruction (RTI) in students with and without reading disabilities: Multi-leveled reading brain's RTI. COGENT PSYCHOLOGY 2018; 5. [PMID: 29610767 PMCID: PMC5877472 DOI: 10.1080/23311908.2018.1424680] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
In students in grades 4 to 9 (22 males, 20 females), two reading disability groups, dyslexia (n = 20) or oral and written language learning disability (OWL LD) (n = 6), were compared to each other and to two kinds of control groups, typical readers (n = 6) or dysgraphia (n = 10), on word reading/spelling skills and fMRI before and after completing 18 computerized reading lessons. Mixed ANOVAs showed significant time effects on repeated measures within participants and between-groups effects on three behavioral markers of reading disabilities (word reading/spelling): All groups improved on the three behavioral measures, but those without disabilities remained higher than those with reading disabilities. On fMRI reading tasks, analyzed for graph theory-derived clustering coefficients within a neural network involved in cognitive control functions, on a word-level task the time × group interaction was significant in right medial cingulate; on a syntax-level task the time × group interaction was significant in left superior frontal and left inferior frontal gyri; and on a multi-sentence text-level task the time × group interaction was significant in right middle frontal gyrus. Three white matter-gray matter correlations became significant only after reading instruction: axial diffusivity in the left superior frontal region with right inferior frontal gyrus during word reading judgments; mean diffusivity in left superior corona radiata with left middle frontal gyrus during sentence reading judgments; and mean diffusivity in left anterior corona radiata with right middle frontal gyrus during multi-sentence reading judgments. The significance of these results for behavioral and brain response to reading instruction (RTI) is discussed.
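As a rough illustration of the graph-theoretic measure named above, the sketch below computes a per-region clustering coefficient from a functional connectivity (correlation) matrix. The threshold, region count, and random demo data are illustrative assumptions, not values or procedures from the study.

```python
import numpy as np

def clustering_coefficients(conn, threshold=0.3):
    """Binary clustering coefficient per node from a correlation matrix.

    conn: (n_regions, n_regions) symmetric functional connectivity matrix.
    threshold: correlations above this value count as edges (illustrative).
    """
    A = (conn > threshold).astype(float)
    np.fill_diagonal(A, 0)                      # no self-connections
    k = A.sum(axis=1)                           # node degree
    triangles = np.diag(A @ A @ A) / 2.0        # closed triangles through each node
    possible = k * (k - 1) / 2.0                # possible triangles per node
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(possible > 0, triangles / possible, 0.0)

# Demo with a random symmetric "connectivity" matrix (16 regions, 200 time points)
rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 16))
conn = np.corrcoef(ts, rowvar=False)
print(clustering_coefficients(conn, threshold=0.1).round(2))
```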
Collapse
Affiliation(s)
- Todd L Richards
- Department of Radiology, Integrated Brain Imaging Center, University of Washington, Seattle, WA, USA
| | - Virginia W Berninger
- Learning Sciences and Human Development, University of Washington, Seattle, WA, USA
| | - Kevin Yagle
- Department of Radiology, Integrated Brain Imaging Center, University of Washington, Seattle, WA, USA
| | - Robert D Abbott
- Educational Statistics and Measurement, University of Washington, Seattle, WA, USA
| | - Dan Peterson
- Department of Radiology, Integrated Brain Imaging Center, University of Washington, Seattle, WA, USA
| |
Collapse
|
73
|
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:740-751. [PMID: 29450493 PMCID: PMC5963044 DOI: 10.1044/2017_jslhr-h-17-0077] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 08/28/2017] [Accepted: 09/20/2017] [Indexed: 05/20/2023]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
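For readers unfamiliar with the d' index mentioned above, the following minimal sketch shows one standard way to compute recognition-memory sensitivity from hit and false-alarm counts. The log-linear correction and the example counts are assumptions for illustration; the study's exact correction procedure is not specified here.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Recognition-memory sensitivity (d') with a log-linear correction so that
    perfect hit or false-alarm rates do not produce infinite z-scores
    (a common convention; not necessarily the study's exact procedure)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., 18 hits / 2 misses on old sentences, 4 false alarms / 16 correct rejections on new ones
print(round(d_prime(18, 2, 4, 16), 2))
```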
Collapse
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
| | - Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
| | - Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO
| |
Collapse
|
74
|
Alain C, Du Y, Bernstein LJ, Barten T, Banai K. Listening under difficult conditions: An activation likelihood estimation meta-analysis. Hum Brain Mapp 2018. [PMID: 29536592 DOI: 10.1002/hbm.24031] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
The brain networks supporting speech identification and comprehension under difficult listening conditions are not well specified. The networks hypothesized to underlie effortful listening include regions responsible for executive control. We conducted meta-analyses of auditory neuroimaging studies to determine whether a common activation pattern of the frontal lobe supports effortful listening under different speech manipulations. Fifty-three functional neuroimaging studies investigating speech perception were divided into three independent Activation Likelihood Estimation analyses based on the type of speech manipulation paradigm used: speech-in-noise (SIN; 16 studies involving 224 participants); spectrally degraded speech using filtering techniques (15 studies involving 270 participants); and linguistic complexity (i.e., levels of syntactic, lexical and semantic intricacy/density; 22 studies involving 348 participants). Meta-analysis of the SIN studies revealed that higher effort was associated with activation in left inferior frontal gyrus (IFG), left inferior parietal lobule, and right insula. Studies using spectrally degraded speech demonstrated increased activation of the insula bilaterally and the left superior temporal gyrus (STG). Studies manipulating linguistic complexity showed activation in the left IFG, right middle frontal gyrus, left middle temporal gyrus and bilateral STG. Planned contrasts revealed left IFG activation in linguistic complexity studies, which differed from activation patterns observed in SIN or spectral degradation studies. Although there was no significant overlap in prefrontal activation across these three speech manipulation paradigms, SIN and spectral degradation showed overlapping regions in the left and right insula. These findings provide evidence that there is regional specialization within the left IFG and that differential executive networks underlie effortful listening.
Collapse
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada.,Department of Psychology, University of Toronto, Toronto, Ontario, Canada
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| | - Lori J Bernstein
- Department of Supportive Care, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.,Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| | - Thijs Barten
- Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada
| | - Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
| |
Collapse
|
75
|
Chiarello C, Vaden KI, Eckert MA. Orthographic influence on spoken word identification: Behavioral and fMRI evidence. Neuropsychologia 2018; 111:103-111. [PMID: 29371094 PMCID: PMC5866781 DOI: 10.1016/j.neuropsychologia.2018.01.032] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Revised: 12/20/2017] [Accepted: 01/21/2018] [Indexed: 10/18/2022]
Abstract
The current study investigated behavioral and neuroimaging evidence for orthographic influences on auditory word identification. To assess such influences, the proportion of similar-sounding words (i.e., phonological neighbors) that were also spelled similarly (i.e., orthographic neighbors) was computed for each auditorily presented word as the Orthographic-to-Phonological Overlap Ratio (OPOR). Speech intelligibility was manipulated by presenting monosyllabic words in multi-talker babble at two signal-to-noise ratios: +3 and +10 dB SNR. Identification rates were lower for high-overlap words in the challenging +3 dB SNR condition. In addition, BOLD contrast increased with OPOR at the more difficult SNR, and decreased with OPOR under more favorable SNR conditions. Both voxel-based and region-of-interest analyses demonstrated robust effects of OPOR in several cingulo-opercular regions. However, contrary to prior theoretical accounts, no task-related activity was observed in posterior regions associated with phonological or orthographic processing. We suggest that, when processing is difficult, orthographic-to-phonological feature overlap increases the availability of competing responses, which then requires additional support from domain-general performance systems in order to produce a single response.
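A minimal sketch of the OPOR computation as described above: for each spoken word, the proportion of its phonological neighbors that are also orthographic neighbors. The example neighbor sets are invented for illustration; in practice they would come from a lexical database.

```python
def opor(phonological_neighbors, orthographic_neighbors):
    """Orthographic-to-Phonological Overlap Ratio: proportion of a word's
    phonological neighbors that are also orthographic neighbors."""
    phono = set(phonological_neighbors)
    ortho = set(orthographic_neighbors)
    if not phono:
        return 0.0
    return len(phono & ortho) / len(phono)

# Illustrative neighbor sets for the spoken word "gate" (not taken from the study)
print(opor({"bait", "date", "late", "mate", "wait"},
           {"date", "gaze", "late", "mate"}))   # -> 0.6
```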
Collapse
Affiliation(s)
- Christine Chiarello
- Department of Psychology, University of California, Riverside, CA 92521, United States.
| | | | | |
Collapse
|
76
|
Bourguignon NJ, Ohashi H, Nguyen D, Gracco VL. The neural dynamics of competition resolution for language production in the prefrontal cortex. Hum Brain Mapp 2018; 39:1391-1402. [PMID: 29265695 PMCID: PMC5807142 DOI: 10.1002/hbm.23927] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 11/14/2017] [Accepted: 12/11/2017] [Indexed: 12/25/2022] Open
Abstract
Previous research suggests a pivotal role of the prefrontal cortex (PFC) in word selection during tasks of confrontation naming (CN) and verb generation (VG), both of which feature varying degrees of competition between candidate responses. However, discrepancies in prefrontal activity have also been reported between the two tasks, in particular more widespread and intense activation in VG extending into (left) ventrolateral PFC, the functional significance of which remains unclear. We propose that these variations reflect differences in competition resolution processes tied to distinct underlying lexico-semantic operations: Although CN involves selecting lexical entries out of limited sets of alternatives, VG requires exploration of possible semantic relations not readily evident from the object itself, requiring prefrontal areas previously shown to be recruited in top-down retrieval of information from lexico-semantic memory. We tested this hypothesis through combined independent component analysis of functional imaging data and information-theoretic measurements of variations in selection competition associated with participants' performance in overt CN and VG tasks. Selection competition during CN engaged the anterior insula and surrounding opercular tissue, while competition during VG recruited additional activity of left ventrolateral PFC. These patterns remained after controlling for participants' speech onset latencies indicative of possible task differences in mental effort. These findings have implications for understanding the neural-computational dynamics of cognitive control in language production and how it relates to the functional architecture of adaptive behavior.
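The abstract above refers to information-theoretic measurements of selection competition. One common index in the verb-generation literature is the Shannon entropy of the distribution of responses elicited by an item; the sketch below uses that index as an assumption and may not match the exact measure used in the study.

```python
import math
from collections import Counter

def response_entropy(responses):
    """Shannon entropy (bits) of the response distribution for one item.
    Higher entropy = more candidate responses competing. Offered as one
    common index of selection competition, not necessarily the study's measure."""
    counts = Counter(responses)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical verb-generation responses to the picture "broom"
print(round(response_entropy(["sweep"] * 12 + ["clean"] * 5 + ["dust"] * 3), 2))
```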
Collapse
Affiliation(s)
| | | | - Don Nguyen
- Centre for Research on Brain, Language and MusicMcGill UniversityMontrealCanada
| | - Vincent L. Gracco
- Haskins LaboratoriesNew HavenConnecticut
- Centre for Research on Brain, Language and MusicMcGill UniversityMontrealCanada
- School of Communication Sciences and DisordersMcGill UniversityMontrealCanada
| |
Collapse
|
77
|
Drijvers L, Özyürek A, Jensen O. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension. Hum Brain Mapp 2018; 39:2075-2087. [PMID: 29380945 PMCID: PMC5947738 DOI: 10.1002/hbm.23987] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 01/09/2018] [Accepted: 01/19/2018] [Indexed: 11/10/2022] Open
Abstract
During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued‐recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand‐area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low‐ and high‐frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low‐ and high‐frequency oscillations in predicting the integration of auditory and visual information at a semantic level.
Collapse
Affiliation(s)
- Linda Drijvers
- Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT, Nijmegen, The Netherlands.,Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR, Nijmegen, The Netherlands
| | - Asli Özyürek
- Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT, Nijmegen, The Netherlands.,Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR, Nijmegen, The Netherlands.,Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, The Netherlands
| | - Ole Jensen
- School of Psychology, Centre for Human Brain Health, University of Birmingham, Hills Building, Birmingham, B15 2TT, United Kingdom
| |
Collapse
|
78
|
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
Collapse
|
79
|
Rowland SC, Hartley DEH, Wiggins IM. Listening in Naturalistic Scenes: What Can Functional Near-Infrared Spectroscopy and Intersubject Correlation Analysis Tell Us About the Underlying Brain Activity? Trends Hear 2018; 22:2331216518804116. [PMID: 30345888 PMCID: PMC6198387 DOI: 10.1177/2331216518804116] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Revised: 08/17/2018] [Accepted: 09/06/2018] [Indexed: 12/24/2022] Open
Abstract
Listening to speech in the noisy conditions of everyday life can be effortful, reflecting the increased cognitive workload involved in extracting meaning from a degraded acoustic signal. Studying the underlying neural processes has the potential to provide mechanistic insight into why listening is effortful under certain conditions. In a move toward studying listening effort under ecologically relevant conditions, we used the silent and flexible neuroimaging technique functional near-infrared spectroscopy (fNIRS) to examine brain activity during attentive listening to speech in naturalistic scenes. Thirty normally hearing participants listened to a series of narratives continuously varying in acoustic difficulty while undergoing fNIRS imaging. Participants then listened to another set of closely matched narratives and rated perceived effort and intelligibility for each scene. As expected, self-reported effort generally increased with worsening signal-to-noise ratio. After controlling for better-ear signal-to-noise ratio, perceived effort was greater in scenes that contained competing speech than in those that did not, potentially reflecting an additional cognitive cost of overcoming informational masking. We analyzed the fNIRS data using intersubject correlation, a data-driven approach suitable for analyzing data collected under naturalistic conditions. Significant intersubject correlation was seen in the bilateral auditory cortices and in a range of channels across the prefrontal cortex. The involvement of prefrontal regions is consistent with the notion that higher order cognitive processes are engaged during attentive listening to speech in complex real-world conditions. However, further research is needed to elucidate the relationship between perceived listening effort and activity in these extended cortical networks.
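A minimal sketch of the leave-one-out intersubject correlation approach mentioned above, applied to channel-wise time courses: each subject's signal is correlated with the average of the remaining subjects. Array shapes, variable names, and the random demo data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out intersubject correlation per channel.

    data: (n_subjects, n_channels, n_timepoints) array of preprocessed
    fNIRS time courses (shape and names are illustrative).
    """
    n_subj, n_chan, _ = data.shape
    isc = np.zeros((n_subj, n_chan))
    for s in range(n_subj):
        others = np.delete(data, s, axis=0).mean(axis=0)   # group average minus subject s
        for c in range(n_chan):
            isc[s, c] = np.corrcoef(data[s, c], others[c])[0, 1]
    return isc.mean(axis=0)                                # group-level ISC per channel

rng = np.random.default_rng(1)
shared = rng.normal(size=(1, 8, 300))          # stimulus-driven signal shared across subjects
noise = rng.normal(size=(30, 8, 300))          # subject-specific noise
print(intersubject_correlation(shared + noise).round(2))
```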
Collapse
Affiliation(s)
- Stephen C. Rowland
- National Institute for Health Research Nottingham Biomedical Research Centre, UK
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
| | - Douglas E. H. Hartley
- National Institute for Health Research Nottingham Biomedical Research Centre, UK
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK
- Nottingham University Hospitals NHS Trust, Queens Medical Centre, UK
| | - Ian M. Wiggins
- National Institute for Health Research Nottingham Biomedical Research Centre, UK
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, UK
| |
Collapse
|
80
|
Investigating the role of temporal lobe activation in speech perception accuracy with normal hearing adults: An event-related fNIRS study. Neuropsychologia 2017; 106:31-41. [PMID: 28888891 DOI: 10.1016/j.neuropsychologia.2017.09.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2017] [Revised: 08/29/2017] [Accepted: 09/04/2017] [Indexed: 12/14/2022]
Abstract
Functional near infrared spectroscopy (fNIRS) is a safe, non-invasive, relatively quiet imaging technique that is tolerant of movement artifact, making it uniquely suited to the assessment of hearing mechanisms. Previous research demonstrates the capacity for fNIRS to detect cortical changes to varying speech intelligibility, revealing a positive relationship between cortical activation amplitude and speech perception score. In the present study, we use an event-related design to investigate the hemodynamic response in the temporal lobe across different listening conditions. We presented participants with a speech recognition task using sentences in quiet, sentences in noise, and vocoded sentences. Hemodynamic responses were examined across conditions and then compared when speech perception was accurate versus when speech perception was inaccurate in the context of noisy speech. Repeated-measures two-way ANOVAs revealed that the speech-in-noise condition (-2.8 dB signal-to-noise ratio; SNR) demonstrated significantly greater activation than the easier listening conditions on multiple channels bilaterally. Further analyses comparing correct recognition trials to incorrect recognition trials (during the presentation phase of the trial) revealed that activation was significantly greater during correct trials. Lastly, during the repetition phase of the trial, where participants correctly repeated the sentence, the hemodynamic response demonstrated significantly higher deoxyhemoglobin than oxyhemoglobin, indicating a difference between the effects of perception and production on the cortical response. Using fNIRS, the present study adds meaningful evidence to the body of knowledge that describes the brain/behavior relationship related to speech perception.
Collapse
|
81
|
Conant LL, Liebenthal E, Desai A, Binder JR. The relationship between maternal education and the neural substrates of phoneme perception in children: Interactions between socioeconomic status and proficiency level. BRAIN AND LANGUAGE 2017; 171:14-22. [PMID: 28437659 PMCID: PMC5602599 DOI: 10.1016/j.bandl.2017.03.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/17/2017] [Accepted: 03/31/2017] [Indexed: 05/25/2023]
Abstract
Relationships between maternal education (ME) and both behavioral performances and brain activation during the discrimination of phonemic and nonphonemic sounds were examined using fMRI in children with different levels of phoneme categorization proficiency (CP). Significant relationships were found between ME and intellectual functioning and vocabulary, with a trend for phonological awareness. A significant interaction between CP and ME was seen for nonverbal reasoning abilities. In addition, fMRI analyses revealed a significant interaction between CP and ME for phonemic discrimination in left prefrontal cortex. Thus, ME was associated with differential patterns of both neuropsychological performance and brain activation contingent on the level of CP. These results highlight the importance of examining SES effects at different proficiency levels. The pattern of results may suggest the presence of neurobiological differences in the children with low CP that affect the nature of relationships with ME.
Collapse
Affiliation(s)
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
| | - Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Anjali Desai
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
| |
Collapse
|
82
|
Stimulating Multiple-Demand Cortex Enhances Vocabulary Learning. J Neurosci 2017; 37:7606-7618. [PMID: 28676576 PMCID: PMC5551060 DOI: 10.1523/jneurosci.3857-16.2017] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2016] [Revised: 04/20/2017] [Accepted: 04/27/2017] [Indexed: 01/04/2023] Open
Abstract
It is well established that networks within multiple-demand cortex (MDC) become active when diverse skills and behaviors are being learnt. However, their causal role in learning remains to be established. In the present study, we first performed functional magnetic resonance imaging on healthy female and male human participants to confirm that MDC was most active in the initial stages of learning a novel vocabulary, consisting of pronounceable nonwords (pseudowords), each associated with a picture of a real object. We then examined, in healthy female and male human participants, whether repetitive transcranial magnetic stimulation of a frontal midline node of the cingulo-opercular MDC affected learning rates specifically during the initial stages of learning. We report that stimulation of this node, but not a control brain region, substantially improved both accuracy and response times during the earliest stage of learning pseudoword–object associations. This stimulation had no effect on the processing of established vocabulary, tested by the accuracy and response times when participants decided whether a real word was accurately paired with a picture of an object. These results provide evidence that noninvasive stimulation to MDC nodes can enhance learning rates, thereby demonstrating their causal role in the learning process. We propose that this causal role makes MDC a candidate target for experimental therapeutics; for example, in stroke patients with aphasia attempting to reacquire a vocabulary. SIGNIFICANCE STATEMENT Learning a task involves the brain system within which that specific task becomes established. Therefore, successfully learning a new vocabulary establishes the novel words in the language system. However, there is evidence that in the early stages of learning, networks within multiple-demand cortex (MDC), which control higher cognitive functions, such as working memory, attention, and monitoring of performance, become active. This activity declines once the task is learnt. The present study demonstrated that a node within MDC, located in midline frontal cortex, becomes active during the early stage of learning a novel vocabulary. Importantly, noninvasive brain stimulation of this node improved performance during this stage of learning. This observation demonstrated that MDC activity is important for learning.
Collapse
|
83
|
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Cingulo-opercular activity affects incidental memory encoding for speech in noise. Neuroimage 2017. [PMID: 28624645 DOI: 10.1016/j.neuroimage.2017.06.028] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions.
Collapse
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States.
| | - Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
| | - Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
| | - Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States
| | - Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, United States.
| |
Collapse
|
84
|
Cognitive persistence: Development and validation of a novel measure from the Wisconsin Card Sorting Test. Neuropsychologia 2017; 102:95-108. [PMID: 28552783 DOI: 10.1016/j.neuropsychologia.2017.05.027] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2017] [Revised: 05/23/2017] [Accepted: 05/25/2017] [Indexed: 12/30/2022]
Abstract
The Wisconsin Card Sorting Test (WCST) has long been used as a neuropsychological assessment of executive function abilities, in particular, cognitive flexibility or "set-shifting". Recent advances in scoring the task have helped to isolate specific WCST performance metrics that index set-shifting abilities and have improved our understanding of how prefrontal and parietal cortex contribute to set-shifting. We present evidence that the ability to overcome task difficulty to achieve a goal, or "cognitive persistence", is another important prefrontal function that is characterized by the WCST and that can be differentiated from efficient set-shifting. This novel measure of cognitive persistence was developed using the WCST-64 in an adult lifespan sample of 230 participants. The measure was validated using individual variation in cingulo-opercular cortex function in a sub-sample of older adults who had completed a challenging speech recognition in noise fMRI task. Specifically, older adults with higher cognitive persistence were more likely to demonstrate word recognition benefit from cingulo-opercular activity. The WCST-derived cognitive persistence measure can be used to disentangle neural processes involved in set-shifting from those involved in persistence.
Collapse
|
85
|
Richards TL, Berninger VW, Yagle KJ, Abbott RD, Peterson DJ. Changes in DTI Diffusivity and fMRI Connectivity Cluster Coefficients for Students with and without Specific Learning Disabilities In Written Language: Brain's Response to Writing Instruction. JOURNAL OF NATURE AND SCIENCE 2017; 3:e350. [PMID: 28670621 PMCID: PMC5488805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Before and after computerized writing instruction, participants completed assessment with normed measures and DTI and fMRI connectivity scanning. Evidence-based differential diagnosis was used at time 1 to assign them to diagnostic groups: typical oral and written language (n=6), dysgraphia (impaired handwriting, n=10), dyslexia (impaired word spelling and reading, n=20), and OWL LD (impaired syntax construction, n=6). The instruction was aimed at subword letter writing, word spelling, and syntax composing. With p < .001 to control for multiple comparisons, the following significant findings were observed in academic achievement, DTI (radial diffusivity RD, axial diffusivity AD, and mean diffusivity MD), and graph cluster coefficients for fMRI connectivity. A time effect (pre-post intervention increase) in handwriting and oral construction of sentence syntax was significant; but diagnostic group effects were significant for dictated spelling and creation of word-specific spellings, with the dyslexia and OWL LD groups scoring lower than the typical control or dysgraphia groups. For RD a time effect occurred in anterior corona radiata and superior frontal region. For AD a time effect occurred in superior corona radiata, superior frontal region, middle frontal gyrus, and superior longitudinal fasciculus. For MD a time effect occurred in the same regions as AD and also anterior corona radiata. A diagnostic group effect occurred for graph cluster coefficients in fMRI connectivity while writing the next letter in the alphabet from memory; but the diagnostic group × time interaction was not significant. The only significant time × treatment interaction occurred in right inferior frontal gyrus associated with orthographic coding. Compared to time 1, cluster coefficients increased at time 2 in all groups except the dysgraphia group, in which they decreased. Implications of the results are discussed for response to instruction (RTI) versus evidence-based differential diagnosis for identifying students with SLDs in writing, which may be best understood at both the behavioral and brain levels of analysis.
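For reference, the DTI scalars named above (AD, RD, MD) are standard functions of the diffusion tensor eigenvalues; the sketch below shows those formulas. The example eigenvalues are illustrative, not values from the study.

```python
import numpy as np

def dti_scalars(eigenvalues):
    """Standard DTI scalar measures from the three tensor eigenvalues
    (sorted so that lambda1 >= lambda2 >= lambda3):
    AD = lambda1, RD = (lambda2 + lambda3) / 2, MD = mean of all three."""
    l1, l2, l3 = np.sort(eigenvalues)[::-1]
    return {"AD": l1, "RD": (l2 + l3) / 2, "MD": (l1 + l2 + l3) / 3}

# Illustrative eigenvalues in units of 10^-3 mm^2/s
print(dti_scalars([1.7, 0.4, 0.3]))
```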
Collapse
Affiliation(s)
- Todd L. Richards
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, WA, USA
| | | | - Kevin J. Yagle
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, WA, USA
| | - Robert D. Abbott
- Educational Statistics and Measurement, University of Washington, Seattle, WA, USA
| | - Daniel J. Peterson
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, WA, USA
| |
Collapse
|
86
|
Hsu NS, Jaeggi SM, Novick JM. A common neural hub resolves syntactic and non-syntactic conflict through cooperation with task-specific networks. BRAIN AND LANGUAGE 2017; 166:63-77. [PMID: 28110105 PMCID: PMC5293615 DOI: 10.1016/j.bandl.2016.12.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2016] [Revised: 11/29/2016] [Accepted: 12/18/2016] [Indexed: 05/09/2023]
Abstract
Regions within the left inferior frontal gyrus (LIFG) have simultaneously been implicated in syntactic processing and cognitive control. Accounts attempting to unify LIFG's function hypothesize that, during comprehension, cognitive control resolves conflict between incompatible representations of sentence meaning. Some studies demonstrate co-localized activity within LIFG for syntactic and non-syntactic conflict resolution, suggesting domain-generality, but others show non-overlapping activity, suggesting domain-specific cognitive control and/or regions that respond uniquely to syntax. We propose, however, that examining exclusive activation sites for certain contrasts creates a false dichotomy: both domain-general and domain-specific neural machinery must coordinate to facilitate conflict resolution across domains. Here, subjects completed four diverse tasks involving conflict (one syntactic, three non-syntactic) while undergoing fMRI. Though LIFG consistently activated within individuals during conflict processing, functional connectivity analyses revealed task-specific coordination with distinct brain networks. Thus, LIFG may function as a conflict-resolution "hub" that cooperates with specialized neural systems according to information content.
Collapse
Affiliation(s)
- Nina S Hsu
- Department of Psychology, University of Maryland, College Park, USA; Center for Advanced Study of Language, University of Maryland, College Park, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, USA; Department of Hearing and Speech Sciences, University of Maryland, College Park, USA.
| | - Susanne M Jaeggi
- School of Education, University of California, Irvine, USA; Department of Cognitive Sciences, University of California, Irvine, USA.
| | - Jared M Novick
- Center for Advanced Study of Language, University of Maryland, College Park, USA; Program in Neuroscience and Cognitive Science, University of Maryland, College Park, USA; Department of Hearing and Speech Sciences, University of Maryland, College Park, USA.
| |
Collapse
|
87
|
Richards TL, Abbott RD, Yagle K, Peterson D, Raskind W, Berninger VW. Self-government of complex reading and writing brains informed by cingulo-opercular network for adaptive control and working memory components for language learning. JOURNAL OF SYSTEMS AND INTEGRATIVE NEUROSCIENCE 2017; 3. [PMID: 29576874 DOI: 10.15761/jsin.1000173] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
To understand mental self-government of the developing reading and writing brain, correlations of clustering coefficients on fMRI reading or writing tasks with BASC-2 Adaptivity ratings (time 1 only) or working memory components (time 1 before and time 2 after instruction previously shown to improve achievement and change the magnitude of fMRI connectivity) were investigated in 39 students in grades 4 to 9 who varied along a continuum of reading and writing skills. A Philips 3T scanner measured connectivity during six leveled fMRI reading tasks (subword: letters and sounds; word: word-specific spellings or affixed words; syntax comprehension: with and without homonym foils or with and without affix foils; and text comprehension) and three fMRI writing tasks: writing the next letter in the alphabet, adding a missing letter in word spelling, and planning for composing. The Brain Connectivity Toolbox generated clustering coefficients based on the cingulo-opercular (CO) network; after controlling for multiple comparisons and movement, significant fMRI connectivity clustering coefficients for CO were identified in 8 brain regions bilaterally (cingulate gyrus, superior frontal gyrus, middle frontal gyrus, inferior frontal gyrus, superior temporal gyrus, insula, cingulum-cingulate gyrus, and cingulum-hippocampus). BASC-2 Parent Ratings for Adaptivity were correlated with CO clustering coefficients on three reading tasks (letter-sound, word affix judgments, and sentence comprehension) and one writing task (writing the next letter in the alphabet). Before instruction, each behavioral working memory measure (phonology, orthography, morphology, and syntax coding; phonological and orthographic loops for integrating internal language and output codes; and supervisory focused and switching attention) correlated significantly with at least one CO clustering coefficient. After instruction, the patterning of correlations changed, with new correlations emerging. Results show that the reading and writing brain's mental government, supported by both CO adaptive control and multiple working memory components, had changed in response to instruction during middle childhood/early adolescence.
Collapse
Affiliation(s)
- Todd L Richards
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, USA
| | - Robert D Abbott
- Educational Statistics and Measurement, University of Washington, Seattle, USA
| | - Kevin Yagle
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, USA
| | - Dan Peterson
- Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, USA
| | - Wendy Raskind
- Medical Genetics, University of Washington, USA.,Psychiatry and Behavioral Sciences, University of Washington, USA
| | - Virginia W Berninger
- Educational Psychology, Learning Sciences and Human Development, University of Washington, Seattle, USA
| |
Collapse
|
88
|
Godwin D, Ji A, Kandala S, Mamah D. Functional Connectivity of Cognitive Brain Networks in Schizophrenia during a Working Memory Task. Front Psychiatry 2017; 8:294. [PMID: 29312020 PMCID: PMC5743938 DOI: 10.3389/fpsyt.2017.00294] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/13/2017] [Accepted: 12/11/2017] [Indexed: 11/21/2022] Open
Abstract
Task-based connectivity studies facilitate the understanding of how the brain functions during cognition, which is commonly impaired in schizophrenia (SZ). Our aim was to investigate functional connectivity during a working memory task in SZ. We hypothesized that the task-negative (default mode) network and the cognitive control (frontoparietal) network would show dysconnectivity. Twenty-five SZ patient and 31 healthy control scans were collected using the customized 3T Siemens Skyra MRI scanner, previously used to collect data for the Human Connectome Project. Blood oxygen level-dependent signals during the 0-back and 2-back conditions were extracted within a network-based parcellation scheme. Average functional connectivity was assessed within five brain networks: frontoparietal (FPN), default mode (DMN), cingulo-opercular (CON), dorsal attention (DAN), and ventral attention networks, as well as between the DMN or FPN and other networks. For within-FPN connectivity, there was a significant interaction between n-back condition and group (p = 0.015), with decreased connectivity at 0-back in SZ subjects compared to controls. FPN-to-DMN connectivity also showed a significant condition × group effect (p = 0.003), with decreased connectivity at 0-back in SZ. Across groups, connectivity within the CON and DAN was increased during the 2-back condition, while DMN connectivity with either CON or DAN was decreased during the 2-back condition. Our findings support the role of the FPN, CON, and DAN in working memory and indicate that the pattern of FPN functional connectivity differs between SZ patients and control subjects during the course of a working memory task.
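A minimal sketch of how average within-network and between-network functional connectivity can be computed from region-wise time series, in the spirit of the analysis described above. The parcel labels, network names, and random demo data are illustrative assumptions, not the study's parcellation.

```python
import numpy as np

def network_connectivity(ts, labels, net_a, net_b=None):
    """Average functional connectivity within a network (net_b=None)
    or between two networks, from region-wise time series.

    ts: (n_timepoints, n_regions) array; labels: network name per region.
    """
    corr = np.corrcoef(ts, rowvar=False)
    a = np.array(labels) == net_a
    b = a if net_b is None else np.array(labels) == net_b
    block = corr[np.ix_(a, b)]
    if net_b is None:
        block = block[~np.eye(block.shape[0], dtype=bool)]  # drop self-correlations
    return block.mean()

rng = np.random.default_rng(2)
ts = rng.normal(size=(200, 6))                               # 200 volumes, 6 regions
labels = ["FPN", "FPN", "FPN", "DMN", "DMN", "DMN"]
print(round(network_connectivity(ts, labels, "FPN"), 3))         # within-FPN
print(round(network_connectivity(ts, labels, "FPN", "DMN"), 3))  # FPN-to-DMN
```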
Collapse
Affiliation(s)
- Douglass Godwin
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
| | - Andrew Ji
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
| | - Sridhar Kandala
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
| | - Daniel Mamah
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
| |
Collapse
|
89
|
Eckert MA, Matthews LJ, Dubno JR. Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:251-262. [PMID: 28060993 PMCID: PMC5533557 DOI: 10.1044/2016_jslhr-h-16-0011] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 07/07/2016] [Indexed: 05/12/2023]
Abstract
PURPOSE Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. METHOD We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. RESULTS One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. CONCLUSION The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.
Collapse
|
90
|
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044 DOI: 10.1080/0361073x.2016.1108785] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
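For readers unfamiliar with noise vocoding, the sketch below outlines the generic procedure: filter speech into frequency bands, extract each band's amplitude envelope, and use that envelope to modulate band-limited noise. Channel count, filter order, and band edges here are illustrative assumptions, not necessarily those of the algorithm used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=16, lo=100.0, hi=7000.0):
    """Rough noise-vocoder sketch: band-pass the input into n_channels
    log-spaced bands, extract each band's envelope, and modulate
    band-limited noise with it (parameters are illustrative)."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    noise = np.random.default_rng(0).normal(size=signal.shape)
    out = np.zeros_like(signal)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))            # slowly varying amplitude
        out += sosfiltfilt(sos, noise) * envelope   # envelope-modulated noise band
    return out / np.max(np.abs(out))

fs = 16000
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)  # stand-in for a speech waveform
vocoded = noise_vocode(demo, fs, n_channels=16)       # degraded but envelope-preserving
```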
Collapse
Affiliation(s)
- Caitlin M Ward
- a Department of Otolaryngology , Washington University in St. Louis , St. Louis , Missouri , USA
| | - Chad S Rogers
- a Department of Otolaryngology , Washington University in St. Louis , St. Louis , Missouri , USA
| | - Kristin J Van Engen
- b Department of Psychology , Washington University in St. Louis , St. Louis , Missouri , USA
| | - Jonathan E Peelle
- a Department of Otolaryngology , Washington University in St. Louis , St. Louis , Missouri , USA
| |
Collapse
|
91
|
Vaden KI, Kuchinsky SE, Ahlstrom JB, Teubner-Rhodes SE, Dubno JR, Eckert MA. Cingulo-Opercular Function During Word Recognition in Noise for Older Adults with Hearing Loss. Exp Aging Res 2016; 42:67-82. [PMID: 26683042 DOI: 10.1080/0361073x.2016.1108784] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT Adaptive control, reflected by elevated activity in cingulo-opercular brain regions, optimizes performance in challenging tasks by monitoring outcomes and adjusting behavior. For example, cingulo-opercular function benefits trial-level word recognition in noise for normal-hearing adults. Because auditory system deficits may limit the communicative benefit from adaptive control, we examined the extent to which cingulo-opercular engagement supports word recognition in noise for older adults with hearing loss (HL). METHODS Participants were selected to form groups with Less HL (n = 12; mean pure tone threshold, pure tone average [PTA] = 19.2 ± 4.8 dB HL [hearing level]) and More HL (n = 12; PTA = 38.4 ± 4.5 dB HL, 0.25-8 kHz, both ears). A word recognition task was performed with words presented in multitalker babble at +3 or +10 dB signal-to-noise ratios (SNRs) during a sparse acquisition fMRI experiment. The participants were middle-aged and older (ages: 64.1 ± 8.4 years) English speakers with no history of neurological or psychiatric diagnoses. RESULTS Elevated cingulo-opercular activity occurred with increased likelihood of correct word recognition on the next trial (t(23) = 3.28, p = .003), and this association did not differ between hearing loss groups. During trials with word recognition errors, the More HL group exhibited higher blood oxygen level-dependent (BOLD) contrast in occipital and parietal regions compared with the Less HL group. Across listeners, more pronounced cingulo-opercular activity during recognition errors was associated with better overall word recognition performance. CONCLUSION The trial-level word recognition benefit from cingulo-opercular activity was equivalent for both hearing loss groups. When speech audibility and performance levels are similar for older adults with mild to moderate hearing loss, cingulo-opercular adaptive control contributes to word recognition in noise.
Collapse
Affiliation(s)
- Kenneth I Vaden
- a Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery , Medical University of South Carolina , Charleston , South Carolina , USA
| | - Stefanie E Kuchinsky
- b Center for Advanced Study of Language , University of Maryland , College Park , Maryland , USA
| | - Jayne B Ahlstrom
- a Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery , Medical University of South Carolina , Charleston , South Carolina , USA
| | - Susan E Teubner-Rhodes
- a Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery , Medical University of South Carolina , Charleston , South Carolina , USA
| | - Judy R Dubno
- a Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery , Medical University of South Carolina , Charleston , South Carolina , USA
| | - Mark A Eckert
- a Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery , Medical University of South Carolina , Charleston , South Carolina , USA
| |
Collapse
|
92
|
Kuchinsky SE, Vaden KI, Ahlstrom JB, Cute SL, Humes LE, Dubno JR, Eckert MA. Task-Related Vigilance During Word Recognition in Noise for Older Adults with Hearing Loss. Exp Aging Res 2016; 42:50-66. [PMID: 26683041 DOI: 10.1080/0361073x.2016.1108712] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT Vigilance refers to the ability to sustain and adapt attentional focus in response to changing task demands. For older adults with hearing loss, vigilant listening may be particularly effortful and variable across individuals. This study examined the extent to which neural responses to sudden, unexpected changes in task structure (e.g., from rest to word recognition epochs) were related to pupillometry measures of listening effort. METHODS Individual differences in the task-evoked pupil response during word recognition were used to predict functional magnetic resonance imaging (fMRI) estimates of neural responses to salient transitions between quiet rest, noisy rest, and word recognition in unintelligible, fluctuating background noise. Participants included 29 older adults (M = 70.2 years old) with hearing loss (pure tone average across all frequencies = 36.1 dB HL [hearing level], SD = 6.7). RESULTS Individuals with a greater average pupil response exhibited a more vigilant pattern of responding on a standardized continuous performance test (response time variability across varying interstimulus intervals; r(27) = .38, p = .04). Across participants there was widespread engagement of attention- and sensory-related cortices in response to transitions between blocks of rest and word recognition conditions. Individuals who exhibited larger task-evoked pupil dilation also showed even greater activity in the right primary auditory cortex in response to changes in task structure. CONCLUSION Pupillometric estimates of word recognition effort predicted variation in activity within cortical regions that were responsive to salient changes in the environment for older adults with hearing loss. The results of the current study suggest that vigilant attention is increased amongst older adults who exert greater listening effort.
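A minimal sketch of a baseline-corrected task-evoked pupil response for a single trial, in the spirit of the pupillometry measure described above. The sampling rate, baseline window, and toy dilation curve are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def task_evoked_pupil_response(trace, fs, baseline_s=0.5):
    """Baseline-corrected task-evoked pupil response for one trial:
    subtract the mean pupil size in a short pre-stimulus window and
    average the dilation over the remainder of the trial."""
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()
    return (trace[n_base:] - baseline).mean()

fs = 60                                                   # 60 Hz eye tracker (assumed)
t = np.arange(0, 4, 1 / fs)
trial = 3.0 + 0.2 * np.clip(t - 0.5, 0, None) * np.exp(-(t - 0.5))  # toy dilation curve
print(round(task_evoked_pupil_response(trial, fs), 3))
```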
Affiliation(s)
- Stefanie E Kuchinsky: Center for Advanced Study of Language, University of Maryland, College Park, Maryland, USA
- Kenneth I Vaden: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Jayne B Ahlstrom: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Stephanie L Cute: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Larry E Humes: Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
- Judy R Dubno: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
- Mark A Eckert: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, USA
|
93
|
Does increasing the intelligibility of a competing sound source interfere more with speech comprehension in older adults than it does in younger adults? Atten Percept Psychophys 2016; 78:2655-2677. [DOI: 10.3758/s13414-016-1193-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
94
|
Tremblay KL, Backer KC. Listening and Learning: Cognitive Contributions to the Rehabilitation of Older Adults With and Without Audiometrically Defined Hearing Loss. Ear Hear 2016; 37 Suppl 1:155S-62S. [PMID: 27355765 PMCID: PMC5182072 DOI: 10.1097/aud.0000000000000307] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Here, we describe some of the ways in which aging negatively affects the way sensory input is transduced and processed within the aging brain and how cognitive work is involved when listening to a less-than-perfect signal. We also describe how audiologic rehabilitation, including hearing aid amplification and listening training, is used to reduce the amount of cognitive resources required for effective auditory communication and conclude with an example of how listening effort is being studied in research laboratories for the purpose(s) of informing clinical practice.
Affiliation(s)
- Kelly L Tremblay: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
|
95
|
Hearing Impairment and Cognitive Energy: The Framework for Understanding Effortful Listening (FUEL). Ear Hear 2016; 37 Suppl 1:5S-27S. [DOI: 10.1097/aud.0000000000000312] [Citation(s) in RCA: 541] [Impact Index Per Article: 67.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
96
|
Peelle JE, Wingfield A. The Neural Consequences of Age-Related Hearing Loss. Trends Neurosci 2016; 39:486-497. [PMID: 27262177 DOI: 10.1016/j.tins.2016.05.001] [Citation(s) in RCA: 152] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Revised: 05/04/2016] [Accepted: 05/09/2016] [Indexed: 01/02/2023]
Abstract
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension.
Affiliation(s)
- Jonathan E Peelle: Department of Otolaryngology, Washington University in St Louis, St Louis, MO, USA
- Arthur Wingfield: Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
|
97
|
Cardin V. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions. Front Neurosci 2016; 10:199. [PMID: 27242405 PMCID: PMC4862970 DOI: 10.3389/fnins.2016.00199] [Citation(s) in RCA: 77] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Accepted: 04/22/2016] [Indexed: 11/13/2022] Open
Abstract
Hearing loss is a common feature in human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss during older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control, and re-allocation of attention. These cortical mechanisms are engaged during effortful listening conditions in normal-hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss.
Affiliation(s)
- Velia Cardin: Department of Experimental Psychology, Deafness, Cognition and Language Research Centre, University College London, London, UK; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
|
98
|
Single subject analyses reveal consistent recruitment of frontal operculum in performance monitoring. Neuroimage 2016; 133:266-278. [PMID: 26973171 DOI: 10.1016/j.neuroimage.2016.03.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2015] [Revised: 02/01/2016] [Accepted: 03/02/2016] [Indexed: 02/01/2023] Open
Abstract
There are continuing uncertainties regarding whether performance monitoring recruits the anterior insula (aI) and/or the frontal operculum (fO). The proximity and morphological complexity of these two regions make proper identification and isolation of the loci of activation extremely difficult. The use of group averaging methods in human neuroimaging might contribute to this problem. The result has been heterogeneous labeling of this region as aI, fO, or aI/fO, and a discussion of results oriented towards either cognitive or interoceptive functions depending on labeling. In the present article, we adapted the spatial preprocessing of functional magnetic resonance imaging data to account for group averaging artifacts and performed a subject-by-subject analysis in three performance monitoring tasks. Results show that functional activity related to feedback or action monitoring consistently follows local morphology in this region and demonstrate that the activity is located predominantly in the fO rather than in the aI. From these results, we propose that a full understanding of the respective role of aI and fO would benefit from increased spatial resolution and subject-by-subject analysis.
|
99
|
Linking Indices of Tonic Alertness: Resting-State Pupil Dilation and Cingulo-Opercular Neural Activity. LECTURE NOTES IN COMPUTER SCIENCE 2016. [DOI: 10.1007/978-3-319-39955-3_21] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
|
100
|
Acoustic richness modulates the neural networks supporting intelligible speech processing. Hear Res 2015; 333:108-117. [PMID: 26723103 DOI: 10.1016/j.heares.2015.12.008] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Revised: 12/07/2015] [Accepted: 12/10/2015] [Indexed: 11/20/2022]
Abstract
The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high.
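The 24-channel noise vocoding described above follows a standard recipe: split the signal into bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise carriers. The sketch below is a generic illustration of that recipe, not the stimulus-generation code from the study; the filter orders, band edges, and envelope cutoff are assumptions, and the input sampling rate must exceed twice the highest band edge.

```python
# Hypothetical sketch of a 24-channel noise vocoder (generic recipe, assumed parameters).
import numpy as np
from scipy import signal

def noise_vocode(x, fs, n_channels=24, f_lo=100.0, f_hi=8000.0, env_cutoff=30.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = signal.sosfiltfilt(band_sos, x)               # analysis band
        env_sos = signal.butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
        env = signal.sosfiltfilt(env_sos, np.abs(band))      # amplitude envelope
        carrier = signal.sosfiltfilt(band_sos, rng.standard_normal(len(x)))  # band noise
        out += np.clip(env, 0, None) * carrier               # envelope-modulated noise
    return out / np.max(np.abs(out))                         # normalize to avoid clipping
```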
|