1. Hartwigsen G, Lim JS, Bae HJ, Yu KH, Kuijf HJ, Weaver NA, Biesbroek JM, Kopal J, Bzdok D. Bayesian modelling disentangles language versus executive control disruption in stroke. Brain Commun 2024; 6:fcae129. PMID: 38707712; PMCID: PMC11069117; DOI: 10.1093/braincomms/fcae129.
Abstract
Stroke is the leading cause of long-term disability worldwide. Incurred brain damage can disrupt cognition, often with persisting deficits in language and executive capacities. Yet, despite their clinical relevance, the commonalities and differences between language and executive control impairments remain under-specified. To fill this gap, we tailored a Bayesian hierarchical modelling solution in a largest-of-its-kind cohort (1080 patients with stroke) to deconvolve language and executive control with respect to stroke lesion topology. Cognitive function was assessed with a rich neuropsychological test battery covering global cognitive function (tested with the Mini-Mental State Examination), language (assessed with a picture naming task), executive speech function (tested with verbal fluency tasks), executive control functions (Trail Making Test and Digit Symbol Coding Task), visuospatial functioning (Rey Complex Figure), as well as verbal learning and memory function (Seoul Verbal Learning Test). Bayesian modelling predicted interindividual differences in eight cognitive outcome scores three months after stroke based on specific tissue lesion topologies. A multivariate factor analysis extracted four distinct cognitive factors that distinguish left- and right-hemispheric contributions to ischaemic tissue lesions. These factors were labelled according to the neuropsychological tests with the strongest factor loadings: one factor delineated language and general cognitive performance and was mainly associated with damage to left-hemispheric brain regions in the frontal and temporal cortex. A factor for executive control summarized mental flexibility, task switching and visual-constructional abilities; this factor was strongly related to right-hemispheric brain damage in posterior regions of the occipital cortex. The interplay of language and executive control was reflected in two further factors, labelled executive speech functions and verbal memory. Impairments on both factors were mainly linked to left-hemispheric lesions. These findings shed light on the causal implications of hemispheric specialization for cognition and take steps towards subgroup-specific treatment protocols after stroke.
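The factor-extraction step described in this abstract can be illustrated with a minimal sketch on synthetic data. This is not the authors' Bayesian hierarchical pipeline; the shapes merely mirror the 1080 patients and eight outcome scores, and the varimax rotation is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in: 1080 patients x 8 cognitive outcome scores
# (the real data are the neuropsychological tests listed above).
scores = rng.standard_normal((1080, 8))

# Extract four latent cognitive factors, as in the abstract.
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
factors = fa.fit_transform(scores)   # per-patient factor expressions, 1080 x 4
loadings = fa.components_            # 4 x 8 factor-to-test loading matrix
```

Inspecting which tests carry the largest loadings in each row of `loadings` is what yields factor labels such as "language" or "executive control".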
Affiliation(s)
- Gesa Hartwigsen
- Wilhelm Wundt Institute for Psychology, Leipzig University, 04109 Leipzig, Germany
- Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Jae-Sung Lim
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, 05505, South Korea
- Hee-Joon Bae
- Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seoul, 13620, South Korea
- Kyung-Ho Yu
- Department of Neurology, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang, 14068, Republic of Korea
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Nick A Weaver
- Department of Neurology and Neurosurgery, Utrecht Brain Center, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- J Matthijs Biesbroek
- Department of Neurology and Neurosurgery, Utrecht Brain Center, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Department of Neurology, Diakonessenhuis Hospital, 3582 KE Utrecht, The Netherlands
- Jakub Kopal
- Department of Biomedical Engineering, Faculty of Medicine, McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2BA, Canada
- Mila—Quebec Artificial Intelligence Institute, Montreal, Quebec H2S 3H1, Canada
- Danilo Bzdok
- Department of Biomedical Engineering, Faculty of Medicine, McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2BA, Canada
- Mila—Quebec Artificial Intelligence Institute, Montreal, Quebec H2S 3H1, Canada
2. Guichet C, Banjac S, Achard S, Mermillod M, Baciu M. Modeling the neurocognitive dynamics of language across the lifespan. Hum Brain Mapp 2024; 45:e26650. PMID: 38553863; PMCID: PMC10980845; DOI: 10.1002/hbm.26650.
Abstract
Healthy aging is associated with a heterogeneous decline across cognitive functions, typically observed as a dissociation between relatively preserved language comprehension and declining language production (LP). Examining resting-state fMRI and neuropsychological data from 628 healthy adults (age 18-88) from the Cam-CAN cohort, we performed state-of-the-art graph theoretical analysis to uncover the neural mechanisms underlying this variability. At the cognitive level, our findings suggest that LP is not an isolated function but is modulated throughout the lifespan by the extent of inter-cognitive synergy between semantic and domain-general processes. At the cerebral level, we show that default mode network (DMN) suppression coupled with fronto-parietal network (FPN) integration allows the brain to compensate for the effects of dedifferentiation at a minimal cost, efficiently mitigating the age-related decline in LP. Relatedly, reduced DMN suppression in midlife could compromise the ability to manage the cost of FPN integration. This may prompt older adults to adopt a more cost-efficient compensatory strategy that maintains global homeostasis at the expense of LP performance. Taken together, we propose that midlife represents a critical neurocognitive juncture signifying the onset of LP decline, as older adults gradually lose control over semantic representations. We summarize our findings in a novel synergistic, economical, nonlinear, emergent, cognitive aging model, integrating connectomic and cognitive dimensions within a complex system perspective.
Affiliation(s)
- Sonja Banjac
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Sophie Achard
- LJK, UMR CNRS 5224, Université Grenoble Alpes, Grenoble, France
- Monica Baciu
- Université Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
3. Billot A, Kiran S. Disentangling neuroplasticity mechanisms in post-stroke language recovery. Brain Lang 2024; 251:105381. PMID: 38401381; PMCID: PMC10981555; DOI: 10.1016/j.bandl.2024.105381.
Abstract
A major objective in post-stroke aphasia research is to gain a deeper understanding of the neuroplastic mechanisms that drive language recovery, with the ultimate goal of enhancing treatment outcomes. Following recent advances in neuroimaging techniques, we can now examine more closely how neural activity patterns change after a stroke. However, how these neural activity changes relate to language impairments and language recovery is still debated. The aim of this review is to provide a theoretical framework to better investigate and interpret neuroplasticity mechanisms underlying language recovery in post-stroke aphasia. We detail two sets of neuroplasticity mechanisms observed at the synaptic level that may explain functional neuroimaging findings in post-stroke aphasia recovery at the network level: feedback-based homeostatic plasticity and associative Hebbian plasticity. In conjunction with these plasticity mechanisms, higher-order cognitive control processes dynamically modulate neural activity in other regions to meet communication demands, despite reduced neural resources. This work provides a network-level neurobiological framework for understanding neural changes observed in post-stroke aphasia and can be used to define guidelines for personalized treatment development.
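The two synaptic-level mechanisms named in this abstract can be caricatured in a toy simulation (a sketch for intuition only, not the authors' model): Hebbian updates strengthen co-active synapses, while a homeostatic scaling step renormalizes the neuron's total synaptic drive back to a set point.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(0.4, 0.6, size=10)   # synaptic weights onto one model neuron
eta, target = 0.01, w.sum()          # learning rate; homeostatic set point

for _ in range(200):
    x = (rng.random(10) < 0.2).astype(float)   # sparse presynaptic activity
    y = w @ x                                  # postsynaptic response
    w += eta * y * x                           # Hebbian: co-activity strengthens synapses
    w *= target / w.sum()                      # homeostatic scaling keeps total drive fixed
```

Individual synapses are redistributed by Hebbian competition, yet `w.sum()` stays at the set point, which is the essence of the feedback-based homeostatic mechanism the review contrasts with pure Hebbian growth.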
Affiliation(s)
- Anne Billot
- Center for Brain Recovery, Boston University, Boston, USA; Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts, USA; Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Swathi Kiran
- Center for Brain Recovery, Boston University, Boston, USA.
4. Johns MA, Calloway RC, Karunathilake IMD, Decruy LP, Anderson S, Simon JZ, Kuchinsky SE. Attention Mobilization as a Modulator of Listening Effort: Evidence From Pupillometry. Trends Hear 2024; 28:23312165241245240. PMID: 38613337; PMCID: PMC11015766; DOI: 10.1177/23312165241245240.
Abstract
Listening to speech in noise can require substantial mental effort, even among younger normal-hearing adults. The task-evoked pupil response (TEPR) has been shown to track the increased effort exerted to recognize words or sentences in increasing noise. However, few studies have examined the trajectory of listening effort across longer, more natural, stretches of speech, or the extent to which expectations about upcoming listening difficulty modulate the TEPR. Seventeen younger normal-hearing adults listened to 60-s-long audiobook passages, repeated three times in a row, at two different signal-to-noise ratios (SNRs) while pupil size was recorded. There was a significant interaction between SNR, repetition, and baseline pupil size on sustained listening effort. At lower baseline pupil sizes, potentially reflecting lower attention mobilization, TEPRs were more sustained in the harder SNR condition, particularly when attention mobilization remained low by the third presentation. At intermediate baseline pupil sizes, differences between conditions were largely absent, suggesting these listeners had optimally mobilized their attention for both SNRs. Lastly, at higher baseline pupil sizes, potentially reflecting overmobilization of attention, the effect of SNR was initially reversed for the second and third presentations: participants initially appeared to disengage in the harder SNR condition, resulting in reduced TEPRs that recovered in the second half of the story. Together, these findings suggest that the unfolding of listening effort over time depends critically on the extent to which individuals have successfully mobilized their attention in anticipation of difficult listening conditions.
Affiliation(s)
- M. A. Johns
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- R. C. Calloway
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- I. M. D. Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- L. P. Decruy
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- S. Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- J. Z. Simon
- Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- S. E. Kuchinsky
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD 20889, USA
5. Anbuhl KL, Diez Castro M, Lee NA, Lee VS, Sanes DH. Cingulate cortex facilitates auditory perception under challenging listening conditions. bioRxiv [Preprint] 2023:2023.11.10.566668. PMID: 38014324; PMCID: PMC10680599; DOI: 10.1101/2023.11.10.566668.
Abstract
We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions and may exert top-down modulation of the auditory cortex (AC). Here, we asked whether the gerbil cingulate cortex (Cg) sends anatomical projections to the AC that facilitate perceptual performance. To model challenging listening conditions, we used a sound discrimination task in which stimulus parameters were presented in either 'Easy' or 'Hard' blocks (i.e., long or short stimulus duration, respectively). Gerbils achieved statistically identical psychometric performance in Easy and Hard blocks. Anatomical tracing experiments revealed a strong, descending projection from layer 2/3 of the Cg1 subregion of the cingulate cortex to superficial and deep layers of primary and dorsal AC. To determine whether Cg improves task performance under challenging conditions, we bilaterally infused muscimol to inactivate Cg1 and found that psychometric thresholds were degraded for only Hard blocks. To test whether the Cg-to-AC projection facilitates task performance, we chemogenetically inactivated these inputs and found that performance was degraded only during Hard blocks. Taken together, the results reveal a descending cortical pathway that facilitates perceptual performance during challenging listening conditions.
Significance Statement: Sensory perception often occurs under challenging conditions, such as a noisy background or a dim environment, yet stimulus sensitivity can remain unaffected. One hypothesis is that cognitive resources are recruited to the task, thereby facilitating perceptual performance. Here, we identify a top-down cortical circuit, from cingulate to auditory cortex in the gerbil, that supports auditory perceptual performance under challenging listening conditions. This pathway is a plausible circuit that supports effortful listening and may be degraded by hearing loss.
6. Hartwigsen G, Lim JS, Bae HJ, Yu KH, Kuijf HJ, Weaver NA, Biesbroek JM, Kopal J, Bzdok D. Bayesian modeling disentangles language versus executive control disruption in stroke. bioRxiv [Preprint] 2023:2023.08.06.552147. PMID: 37609325; PMCID: PMC10441359; DOI: 10.1101/2023.08.06.552147.
Abstract
Stroke is the leading cause of long-term disability worldwide. Incurred brain damage disrupts cognition, often with persisting deficits in language and executive capacities. Despite their clinical relevance, the commonalities and differences of language versus executive control impairments remain under-specified. We tailored a Bayesian hierarchical modeling solution in a largest-of-its-kind cohort (1080 stroke patients) to deconvolve language and executive control in the brain substrates of stroke insults. Four cognitive factors distinguished left- and right-hemispheric contributions to ischemic tissue lesions. One factor delineated language and general cognitive performance and was mainly associated with damage to left-hemispheric brain regions in the frontal and temporal cortex. A factor for executive control summarized control and visual-constructional abilities. This factor was strongly related to right-hemispheric brain damage of posterior regions in the occipital cortex. The interplay of language and executive control was reflected in two factors: executive speech functions and verbal memory. Impairments on both were mainly linked to left-hemispheric lesions. These findings shed light on the causal implications of hemispheric specialization for cognition and make steps towards subgroup-specific treatment protocols after stroke.
7. Philips M, Schneck SM, Levy DF, Wilson SM. Modality-Specificity of the Neural Correlates of Linguistic and Non-Linguistic Demand. Neurobiol Lang (Camb) 2023; 4:516-535. PMID: 37841966; PMCID: PMC10575553; DOI: 10.1162/nol_a_00114.
Abstract
Imaging studies of language processing in clinical populations can be complicated to interpret for several reasons, one being the difficulty of matching the effortfulness of processing across individuals or tasks. To better understand how effortful linguistic processing is reflected in functional activity, we investigated the neural correlates of task difficulty in linguistic and non-linguistic contexts in the auditory modality and then compared our findings to a recent analogous experiment in the visual modality in a different cohort. Nineteen neurologically normal individuals were scanned with fMRI as they performed a linguistic task (semantic matching) and a non-linguistic task (melodic matching), each with two levels of difficulty. We found that left hemisphere frontal and temporal language regions, as well as the right inferior frontal gyrus, were modulated by linguistic demand and not by non-linguistic demand. This was broadly similar to what was previously observed in the visual modality. In contrast, the multiple demand (MD) network, a set of brain regions thought to support cognitive flexibility in many contexts, was modulated neither by linguistic demand nor by non-linguistic demand in the auditory modality. This finding was in striking contradistinction to what was previously observed in the visual modality, where the MD network was robustly modulated by both linguistic and non-linguistic demand. Our findings suggest that while the language network is modulated by linguistic demand irrespective of modality, modulation of the MD network by linguistic demand is not inherent to linguistic processing, but rather depends on specific task factors.
Affiliation(s)
- Mackenzie Philips
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Sarah M. Schneck
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Deborah F. Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M. Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
8. Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023; 437:108856. PMID: 37531847; DOI: 10.1016/j.heares.2023.108856.
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). 
Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
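The core ST-MTF idea above — predicting trial-wise activation from the spectrotemporal-modulation distortion applied on each trial, so that the fitted weights over the modulation grid act as a modulation transfer function — can be sketched as a regularized linear model. This is an illustrative sketch on synthetic data; the dimensions, ridge penalty, and `true_mtf` profile are assumptions, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_trials, n_mods = 200, 48                    # trials x (temporal x spectral) modulation bins
distortion = rng.random((n_trials, n_mods))   # per-trial modulation filtering of the stimulus
true_mtf = rng.standard_normal(n_mods)        # hypothetical sensitivity profile
bold = distortion @ true_mtf + 0.1 * rng.standard_normal(n_trials)  # simulated activation

# Fit ridge weights: one weight per modulation bin plays the role of the ST-MTF
model = Ridge(alpha=1.0).fit(distortion, bold)
mtf_weights = model.coef_
```

Comparing the pattern of `mtf_weights` across conditions (Unison vs. Competing) is the kind of contrast the study uses to characterize what each network represents.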
Affiliation(s)
- Nicole Whittle
- VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee
- Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes
- Loma Linda University, Loma Linda, CA, United States
- Alex Yi
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States.
9. Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023; 278:120271. PMID: 37442310; PMCID: PMC10460966; DOI: 10.1016/j.neuroimage.2023.120271.
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was lower for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was also lower in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
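The central measurement here — dissimilarity between multivoxel response patterns to clear versus noisy speech — can be sketched with a simple correlation-distance metric on synthetic patterns. The study's actual method is individual-differences multidimensional scaling; the voxel counts and noise levels below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import correlation  # 1 - Pearson r

rng = np.random.default_rng(3)
clear = rng.standard_normal(500)                    # voxel pattern for clear speech
noisy_av = clear + 0.3 * rng.standard_normal(500)   # audiovisual noisy: pattern close to clear
noisy_a = clear + 1.0 * rng.standard_normal(500)    # auditory-only noisy: pattern farther away

d_av = correlation(clear, noisy_av)
d_a = correlation(clear, noisy_a)
```

Lower dissimilarity (`d_av < d_a`) corresponds to higher intelligibility, mirroring the audiovisual benefit reported in the abstract.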
Affiliation(s)
- Yue Zhang
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States.
10. Sheng Y, Yang S, Rao J, Zhang Q, Li J, Wang D, Zheng W. Age of Bilingual Onset Shapes the Dynamics of Functional Connectivity and Laterality in the Resting-State. Brain Sci 2023; 13:1231. PMID: 37759832; PMCID: PMC10526135; DOI: 10.3390/brainsci13091231.
Abstract
Bilingualism is known to enhance cognitive function and flexibility of the brain. However, it is not clear how bilingual experience affects the time-varying functional network and whether these changes depend on the age of bilingual onset. This study intended to investigate the bilingual-related dynamic functional connectivity (dFC) based on the resting-state functional magnetic resonance images, including 23 early bilinguals (EBs), 30 late bilinguals (LBs), and 31 English monolinguals. The analysis identified two dFC states, and LBs showed more transitions between these states than monolinguals. Moreover, more frequent left-right switches were found in functional laterality in prefrontal, lateral temporal, lateral occipital, and inferior parietal cortices in EBs compared with LB and monolingual cohorts, and the laterality changes in the anterior superior temporal cortex were negatively correlated with L2 proficiency. These findings highlight how the age of L2 acquisition affects cortico-cortical dFC pattern and provide insight into the neural mechanisms of bilingualism.
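Identifying recurring dFC "states" and counting transitions between them, as this abstract describes, is commonly done with sliding-window correlations followed by clustering. A minimal sketch on synthetic time series follows; the window length, step, region count, and use of k-means are assumptions, since the abstract does not state the exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
ts = rng.standard_normal((300, 10))   # 300 volumes x 10 regions (synthetic resting-state data)
win, step = 30, 5                     # sliding-window parameters (assumed)

# One vectorized correlation matrix (upper triangle) per window
iu = np.triu_indices(10, k=1)
windows = [np.corrcoef(ts[s:s + win].T)[iu] for s in range(0, 300 - win + 1, step)]
fc = np.array(windows)                # n_windows x n_connections

# Cluster windows into two recurring dFC states; count state transitions over time
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fc)
transitions = int(np.sum(states[1:] != states[:-1]))
```

Comparing `transitions` between groups is the kind of statistic behind the finding that LBs switch states more often than monolinguals.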
Affiliation(s)
- Yucen Sheng
- School of Foreign Languages, Lanzhou Jiaotong University, Lanzhou 730070, China
- Songyu Yang
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Juan Rao
- School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China
- Qin Zhang
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Jialong Li
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Dianjian Wang
- School of Foreign Languages, Lanzhou Jiaotong University, Lanzhou 730070, China
- Weihao Zheng
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
11. Cartocci G, Inguscio BMS, Giorgi A, Vozzi A, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Fetoni AR, Freni F, Ciodaro F, Galletti F, Albera R, Canale A, Piccioni LO, Babiloni F. Music in noise recognition: An EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 2023; 18:e0288461. PMID: 37561758; PMCID: PMC10414671; DOI: 10.1371/journal.pone.0288461.
Abstract
Despite the plethora of studies investigating listening effort and the large body of research on music perception by cochlear implant (CI) users, the influence of background noise on music processing has not previously been examined. Given that listening effort is typically assessed with speech-in-noise recognition tasks, the aim of the present study was to investigate listening effort during an emotional categorization task on musical pieces presented at different levels of background noise. In addition to participants' ratings and performance, listening effort was investigated using EEG features known to be involved in this phenomenon, namely alpha activity in parietal areas and in the left inferior frontal gyrus (IFG), which includes Broca's area. Results showed that CI users performed worse than normal hearing (NH) controls in recognizing the emotional content of the stimuli. Furthermore, when the alpha activity in the signal-to-noise ratio (SNR) 5 and SNR 10 conditions was baseline-corrected by subtracting the activity in the Quiet condition (ideally removing the emotional content of the music and isolating the difficulty attributable to the SNRs), CI users showed higher parietal alpha activity and higher activity in the right-hemisphere homologue of the left IFG (EEG channel F8) than NH controls. Finally, these results provide a novel suggestion of a particular sensitivity of F8 to SNR-related listening effort in music.
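The alpha-based effort index described above — band power in an SNR condition minus band power in the Quiet condition — can be sketched for a single channel using Welch's method. The signal below is synthetic (a 10 Hz sine plus noise standing in for parietal EEG); the sampling rate and band edges are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 250                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(5)
t = np.arange(fs * 10) / fs           # 10 s of synthetic single-channel EEG

def alpha_power(sig, fs=fs):
    """Mean power spectral density in the 8-12 Hz alpha band."""
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

quiet = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)        # Quiet baseline
snr5 = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)   # noisier condition

effort_index = alpha_power(snr5) - alpha_power(quiet)   # SNR condition minus Quiet
```

A positive `effort_index` corresponds to the increased condition-specific alpha activity the study reports for CI users.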
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Andrea Giorgi
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Anna Rita Fetoni
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A Gemelli," IRCCS, Rome, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Roberto Albera
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Andrea Canale
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Lucia Oriella Piccioni
- Department of Otolaryngology-Head and Neck Surgery, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
12
Cartocci G, Inguscio BMS, Giliberto G, Vozzi A, Giorgi A, Greco A, Babiloni F, Attanasio G. Listening Effort in Tinnitus: A Pilot Study Employing a Light EEG Headset and Skin Conductance Assessment during the Listening to a Continuous Speech Stimulus under Different SNR Conditions. Brain Sci 2023; 13:1084. [PMID: 37509014 PMCID: PMC10377270 DOI: 10.3390/brainsci13071084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Revised: 07/07/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
Background noise elicits listening effort. What else is tinnitus if not an endogenous background noise? From such reasoning, we hypothesized increased listening effort in tinnitus patients during listening tasks. This hypothesis was tested by investigating electroencephalographic and skin conductance indices of listening effort, particularly parietal and frontal alpha activity and electrodermal activity (EDA). Tinnitus distress questionnaires (THI and TQ12-I) were also employed. Parietal alpha values were positively correlated with TQ12-I scores, and both were negatively correlated with EDA; pre-stimulus frontal alpha correlated with the THI score in our pilot study; and results showed a general trend of increased frontal alpha activity in the tinnitus group compared with the control group. Parietal alpha during listening to the stimuli, being positively correlated with the TQ12-I, appears to reflect both higher listening effort in tinnitus patients and the perception of tinnitus symptoms. The negative correlation of both listening effort (parietal alpha) and perceived tinnitus symptoms (TQ12-I scores) with EDA levels could be explained by a sympathetic nervous system that is less responsive in preparing the body to expend increased energy during the "fight or flight" response, because tinnitus perception depletes those energetic resources.
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Bianca Maria Serena Inguscio
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Human Neuroscience, Sapienza University of Rome, 00185 Rome, Italy
- Giovanna Giliberto
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Alessia Vozzi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Andrea Giorgi
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- SAIMLAL Department, Sapienza University of Rome, 00185 Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, 00161 Rome, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, 00161 Rome, Italy
- Department of Research and Development, BrainSigns Ltd., 00198 Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou 310005, China
13
Shatzer HE, Russo FA. Brightening the Study of Listening Effort with Functional Near-Infrared Spectroscopy: A Scoping Review. Semin Hear 2023; 44:188-210. [PMID: 37122884 PMCID: PMC10147513 DOI: 10.1055/s-0043-1766105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/09/2023] Open
Abstract
Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and its recent advances offer many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will know how fNIRS works, be able to summarize its uses for listening effort research, and be able to apply this knowledge toward generating future research in this area.
Affiliation(s)
- Hannah E. Shatzer
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
14
Eckert MA, Iuricich F, Harris KC, Hamlett ED, Vazey EM, Aston-Jones G. Locus coeruleus and dorsal cingulate morphology contributions to slowed processing speed. Neuropsychologia 2023; 179:108449. [PMID: 36528219 PMCID: PMC9906468 DOI: 10.1016/j.neuropsychologia.2022.108449] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 12/10/2022] [Accepted: 12/13/2022] [Indexed: 12/15/2022]
Abstract
Slowed information processing speed is a defining feature of cognitive aging. Nucleus locus coeruleus (LC) and medial prefrontal regions are targets for understanding slowed processing speed because these brain regions influence neural and behavioral response latencies through their roles in optimizing task performance. Although structural measures of medial prefrontal cortex have been consistently related to processing speed, it is unclear if 1) declines in LC structure underlie this association because of reciprocal connections between LC and medial prefrontal cortex, or 2) if LC declines provide a separate explanation for age-related changes in processing speed. LC and medial prefrontal structural measures were predicted to explain age-dependent individual differences in processing speed in a cross-sectional sample of 43 adults (19-79 years; 63% female). Higher turbo-spin echo LC contrast, based on a persistent homology measure, and greater dorsal cingulate cortical thickness were significantly and each uniquely related to faster processing speed. However, only dorsal cingulate cortical thickness appeared to statistically mediate age-related differences in processing speed. The results suggest that individual differences in cognitive processing speed can be attributed, in part, to structural variation in nucleus LC and medial prefrontal cortex, with the latter key to understanding why older adults exhibit slowed processing speed.
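The mediation result reported here (dorsal cingulate cortical thickness statistically mediating age-related differences in processing speed) is typically tested with a product-of-coefficients approach. Below is a minimal sketch under that assumption; `simple_mediation` is a hypothetical helper, not the authors' code, and a full analysis would add a bootstrap significance test of the indirect effect.

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation for x -> m -> y.
    a: slope of m regressed on x; b: slope of m in y regressed on x and m;
    the indirect (mediated) effect is a * b."""
    ones = np.ones_like(x)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a, b, a * b
```

In the study's terms, x would be age, m the cortical-thickness measure, and y processing speed; the LC contrast measure could enter the same framework as a second candidate mediator.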
Affiliation(s)
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, MSC 550, Charleston, S.C., 29425-5500, USA
- Federico Iuricich
- Visual Computing Division, School of Computing, Clemson University, Clemson, S.C., 29634, USA
- Kelly C Harris
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, MSC 550, Charleston, S.C., 29425-5500, USA
- Eric D Hamlett
- Department of Pathology and Laboratory Medicine, Medical University of South Carolina, Charleston, S.C., 29425-5500, USA
- Elena M Vazey
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003-9297, USA
- Gary Aston-Jones
- Brain Health Institute, Rutgers University/Rutgers Biomedical and Health Sciences, Piscataway, NJ, 08854, USA
15
MacGregor LJ, Gilbert RA, Balewski Z, Mitchell DJ, Erzinçlioğlu SW, Rodd JM, Duncan J, Fedorenko E, Davis MH. Causal Contributions of the Domain-General (Multiple Demand) and the Language-Selective Brain Networks to Perceptual and Semantic Challenges in Speech Comprehension. Neurobiol Lang 2022; 3:665-698. [PMID: 36742011 PMCID: PMC9893226 DOI: 10.1162/nol_a_00081] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 09/07/2022] [Indexed: 06/18/2023]
Abstract
Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic-degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: Using recent experience to update word meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.
Affiliation(s)
- Lucy J. MacGregor
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Rebecca A. Gilbert
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Zuzanna Balewski
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA
- Daniel J. Mitchell
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Jennifer M. Rodd
- Psychology and Language Sciences, University College London, London, UK
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA
- Matthew H. Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
16
Lanzilotti C, Andéol G, Micheyl C, Scannella S. Cocktail party training induces increased speech intelligibility and decreased cortical activity in bilateral inferior frontal gyri. A functional near-infrared study. PLoS One 2022; 17:e0277801. [PMID: 36454948 PMCID: PMC9714910 DOI: 10.1371/journal.pone.0277801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 11/03/2022] [Indexed: 12/03/2022] Open
Abstract
The human brain networks responsible for selectively listening to one voice amid other talkers remain to be clarified. The present study aimed to investigate relationships between cortical activity and performance in a speech-in-speech task, before (Experiment I) and after (Experiment II) training-induced improvements. In Experiment I, 74 participants performed a speech-in-speech task while their cortical activity was measured using a functional near-infrared spectroscopy (fNIRS) device. One target talker and one masker talker were presented simultaneously at three target-to-masker ratios (TMRs): adverse, intermediate and favorable. Behavioral results showed that performance increased monotonically with TMR in some participants, whereas in others it failed to decrease, or even improved, in the adverse-TMR condition. At the neural level, an extensive brain network including frontal (left prefrontal cortex, right dorsolateral prefrontal cortex and bilateral inferior frontal gyri) and temporal (bilateral auditory cortex) regions was engaged more strongly by the intermediate condition than by the other two. Additionally, bilateral frontal gyri and left auditory cortex activity correlated positively with behavioral performance in the adverse-TMR condition. In Experiment II, 27 participants whose performance was poorest in the adverse-TMR condition of Experiment I were trained to improve performance in that condition. Results showed significant performance improvements along with decreased activity in bilateral inferior frontal gyri, the right dorsolateral prefrontal cortex, the left inferior parietal cortex and the right auditory cortex in the adverse-TMR condition after training. Arguably, lower neural activity reflects more efficient masker inhibition after speech-in-speech training. Because speech-in-noise tasks also engage frontal and temporal regions, we suggest that, regardless of the type of masking (speech or noise), the complexity of the task prompts the involvement of a similar brain network. Furthermore, the initially substantial cognitive recruitment is reduced following training, yielding an economy of cognitive resources.
Affiliation(s)
- Cosima Lanzilotti
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
- ISAE-SUPAERO, Université de Toulouse, Toulouse, France
- Thales SIX GTS France, Gennevilliers, France
- Guillaume Andéol
- Département Neuroscience et Sciences Cognitives, Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
17
Hebert JR, Filley CM. Multisensory integration and white matter pathology: Contributions to cognitive dysfunction. Front Neurol 2022; 13:1051538. [PMID: 36408503 PMCID: PMC9668060 DOI: 10.3389/fneur.2022.1051538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 10/18/2022] [Indexed: 11/23/2022] Open
Abstract
The ability to simultaneously process and integrate multiple sensory stimuli is paramount to effective daily function and essential for normal cognition. Multisensory management depends critically on the interplay between bottom-up and top-down processing of sensory information, with white matter (WM) tracts acting as the conduit between cortical and subcortical gray matter (GM) regions. White matter tracts and GM structures operate in concert to manage both multisensory signals and cognition. Altered sensory processing leads to difficulties in reweighting and modulating multisensory input during various routine environmental challenges, and thus contributes to cognitive dysfunction. To examine the specific role of WM in altered sensory processing and cognitive dysfunction, this review focuses on two neurologic disorders with diffuse WM pathology, multiple sclerosis and mild traumatic brain injury, in which persistently altered sensory processing and cognitive impairment are common. In these disorders, cognitive dysfunction in association with altered sensory processing may develop initially from slowed signaling in WM tracts and, in some cases, GM pathology secondary to WM disruption, but also because of interference with cognitive function by the added burden of managing concurrent multimodal primary sensory signals. These insights promise to inform research in the neuroimaging, clinical assessment, and treatment of WM disorders, and the investigation of WM-behavior relationships.
Affiliation(s)
- Jeffrey R. Hebert
- Physical Performance Laboratory, Marcus Institute for Brain Health, University of Colorado School of Medicine, Aurora, CO, United States
- Christopher M. Filley
- Behavioral Neurology Section, Department of Neurology and Psychiatry, Marcus Institute for Brain Health, University of Colorado School of Medicine, Aurora, CO, United States
18
Droby A, Varangis E, Habeck C, Hausdorff JM, Stern Y, Mirelman A, Maidan I. Effects of aging on cognitive and brain inter-network integration patterns underlying usual and dual-task gait performance. Front Aging Neurosci 2022; 14:956744. [PMID: 36247996 PMCID: PMC9557358 DOI: 10.3389/fnagi.2022.956744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Introduction: Aging affects the interplay between cognition and gait performance. Neuroimaging studies have reported associations between gait performance and structural measures; however, functional connectivity (FC) analysis of imaging data can help identify the dynamic neural mechanisms underlying optimal performance. Here, we investigated the effects of aging on the divergent cognitive and inter-network FC patterns underlying gait performance during usual walking (UW) and dual-task (DT) walking. Methods: A total of 115 community-dwelling, healthy participants between 20 and 80 years of age were enrolled. All participants underwent comprehensive cognitive and gait assessments in the two conditions, as well as resting-state functional MRI (fMRI) scans. Inter-network FC from motor-related networks to six primary cognitive networks was estimated. Stepwise regression models tested the relationships between gait parameters, inter-network FC, neuropsychological scores, and demographic variables. A threshold of p < 0.05 was adopted for all statistical analyses. Results: UW was largely associated with FC levels between motor and sustained attention networks. In the overall group, DT performance was associated with inter-network FC between motor and divided attention networks, and with processing speed. In young adults, UW was associated with inter-network FC between motor and sustained attention networks, whereas DT performance was associated with cognitive performance as well as inter-network connectivity between motor and divided attention networks (VAN and SAL). In contrast, in the older age group (> 65 years), increased integration between motor, dorsal and ventral attention, and default-mode networks was negatively associated with UW gait performance, and inverse associations between motor-sustained attention inter-network connectivity and DT performance were observed. Conclusion: While UW relies on inter-network FC between motor and sustained attention networks, DT performance relies on additional cognitive capacities and increased integration of motor and executive control networks. The FC analyses demonstrate that the decline in cognitive performance with aging leads to reliance on additional neural resources to maintain routine walking tasks.
Affiliation(s)
- Amgad Droby
- Laboratory for Early Markers of Neurodegeneration (LEMON), Center for the Study of Movement, Cognition, and Mobility (CMCM), Tel Aviv Sourasky Medical Center, Neurological Institute, Tel Aviv, Israel
- Department of Neurology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv-Yafo, Israel
- Eleanna Varangis
- Cognitive Neuroscience Division, Department of Neurology, Columbia University, New York, NY, United States
- Christian Habeck
- Cognitive Neuroscience Division, Department of Neurology, Columbia University, New York, NY, United States
- Jeffrey M. Hausdorff
- Laboratory for Early Markers of Neurodegeneration (LEMON), Center for the Study of Movement, Cognition, and Mobility (CMCM), Tel Aviv Sourasky Medical Center, Neurological Institute, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv-Yafo, Israel
- Department of Orthopedic Surgery, Rush Alzheimer’s Disease Center, Rush University, Chicago, IL, United States
- Department of Physical Therapy, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Yaakov Stern
- Cognitive Neuroscience Division, Department of Neurology, Columbia University, New York, NY, United States
- Anat Mirelman
- Laboratory for Early Markers of Neurodegeneration (LEMON), Center for the Study of Movement, Cognition, and Mobility (CMCM), Tel Aviv Sourasky Medical Center, Neurological Institute, Tel Aviv, Israel
- Department of Neurology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv-Yafo, Israel
- Inbal Maidan
- Laboratory for Early Markers of Neurodegeneration (LEMON), Center for the Study of Movement, Cognition, and Mobility (CMCM), Tel Aviv Sourasky Medical Center, Neurological Institute, Tel Aviv, Israel
- Department of Neurology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv-Yafo, Israel
19
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
20
Ritz H, Wild CJ, Johnsrude IS. Parametric Cognitive Load Reveals Hidden Costs in the Neural Processing of Perfectly Intelligible Degraded Speech. J Neurosci 2022; 42:4619-4628. [PMID: 35508382 PMCID: PMC9186799 DOI: 10.1523/jneurosci.1777-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/08/2022] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but materials in these works were also less intelligible, and so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than is required to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.SIGNIFICANCE STATEMENT Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands? 
Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.
Affiliation(s)
- Harrison Ritz
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
- Conor J Wild
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Departments of Psychology and Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 3K7, Canada
21
Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022; 11:e75323. [PMID: 35666138 PMCID: PMC9225001 DOI: 10.7554/elife.75323] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 06/04/2022] [Indexed: 12/14/2022] Open
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
22
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. [PMID: 35259524 PMCID: PMC9082296 DOI: 10.1016/j.neuroimage.2022.119042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 02/23/2022] [Accepted: 02/26/2022] [Indexed: 01/31/2023] Open
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition such as raising or lowering the decision criteria for more accurate or faster recognition may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. 
These results suggest cingulo-opercular cortex contributes to criteria adjustments to optimize speech recognition task performance.
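The latency-based modelling described above can be illustrated with a minimal drift-diffusion sketch. This is not the authors' model; the drift, boundary, and noise values below are invented. The point it demonstrates is the criterion trade-off: a higher decision criterion (boundary) collects more evidence before responding, yielding slower but more accurate recognition.

```python
import random

def simulate_trial(drift, boundary, noise=1.0, dt=0.01, rng=None):
    """One diffusion trial: accumulate noisy evidence until |x| >= boundary.
    Returns (correct, reaction_time); 'correct' means the accumulator hit
    the boundary on the side of the (positive) drift."""
    rng = rng or random
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return x > 0, t

def summarize(drift, boundary, n=2000, seed=1):
    """Accuracy and mean RT over n simulated trials."""
    rng = random.Random(seed)
    results = [simulate_trial(drift, boundary, rng=rng) for _ in range(n)]
    acc = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    return acc, mean_rt

low = summarize(drift=1.0, boundary=0.5)   # lax criterion: fast, error-prone
high = summarize(drift=1.0, boundary=1.5)  # strict criterion: slow, accurate
```

Fitting such a model to observed response-latency distributions is what lets criterion differences be estimated from behaviour alone.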
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States; Corresponding author (K.I. Vaden Jr)
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States; Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
23
Age-related differences in the neural network interactions underlying the predictability gain. Cortex 2022; 154:269-286. [DOI: 10.1016/j.cortex.2022.05.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 03/30/2022] [Accepted: 05/03/2022] [Indexed: 11/20/2022]
24
Lim SJ, Thiel C, Sehm B, Deserno L, Lepsien J, Obleser J. Distributed networks for auditory memory differentially contribute to recall precision. Neuroimage 2022; 256:119227. [PMID: 35452804 DOI: 10.1016/j.neuroimage.2022.119227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 03/13/2022] [Accepted: 04/17/2022] [Indexed: 11/25/2022] Open
Abstract
Re-directing attention to objects in working memory can enhance their representational fidelity. However, how this attentional enhancement of memory representations is implemented across distinct sensory and cognitive-control brain networks remains unspecified. The present fMRI experiment leverages psychophysical modelling and multivariate auditory-pattern decoding as behavioral and neural proxies of mnemonic fidelity. Listeners performed an auditory syllable pitch-discrimination task and received retro-active cues to selectively attend to a to-be-probed syllable in memory. Accompanied by increased neural activation in fronto-parietal and cingulo-opercular networks, valid retro-cues yielded faster and more perceptually sensitive responses in recalling acoustic detail of memorized syllables. Information about the cued auditory object was decodable from hemodynamic response patterns in the superior temporal sulcus (STS), fronto-parietal, and sensorimotor regions. However, among these regions retaining auditory memory objects, neural fidelity in the left STS and its enhancement through attention-to-memory best predicted individuals' gain in auditory memory recall precision. Our results demonstrate how functionally discrete brain regions differentially contribute to the attentional enhancement of memory representations.
Affiliation(s)
- Sung-Joo Lim
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, Lübeck 23562, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Department of Psychology, Binghamton University, State University of New York, 4400 Vestal Parkway E, Vestal, Binghamton, NY 13902, USA; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA.
- Christiane Thiel
- Department of Psychology, Carl von Ossietzky University of Oldenburg, Oldenburg 26129, Germany
- Bernhard Sehm
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lorenz Deserno
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Jöran Lepsien
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, Lübeck 23562, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck 23562, Germany.
25
Eckert MA, Vaden KI, Iuricich F. Cortical asymmetries at different spatial hierarchies relate to phonological processing ability. PLoS Biol 2022; 20:e3001591. [PMID: 35381012 PMCID: PMC8982829 DOI: 10.1371/journal.pbio.3001591] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 03/03/2022] [Indexed: 11/22/2022] Open
Abstract
The ability to map speech sounds to corresponding letters is critical for establishing proficient reading. People vary in this phonological processing ability, which has been hypothesized to result from variation in hemispheric asymmetries within brain regions that support language. A cerebral lateralization hypothesis predicts that more asymmetric brain structures facilitate the development of foundational reading skills like phonological processing. That is, structural asymmetries are predicted to linearly increase with ability. In contrast, a canalization hypothesis predicts that asymmetries constrain behavioral performance within a normal range. That is, structural asymmetries are predicted to quadratically relate to phonological processing, with average phonological processing occurring in people with the most asymmetric structures. These predictions were examined in relatively large samples of children (N = 424) and adults (N = 300), using a topological asymmetry analysis of T1-weighted brain images and a decoding measure of phonological processing. There was limited evidence of structural asymmetry and phonological decoding associations in classic language-related brain regions. However, and in modest support of the cerebral lateralization hypothesis, small to medium effect sizes were observed where phonological decoding accuracy increased with the magnitude of the largest structural asymmetry across left hemisphere cortical regions, but not right hemisphere cortical regions, for both the adult and pediatric samples. In support of the canalization hypothesis, small to medium effect sizes were observed where phonological decoding in the normal range was associated with increased asymmetries in specific cortical regions for both the adult and pediatric samples, which included performance monitoring and motor planning brain regions that contribute to oral and written language functions. 
Thus, the relevance of each hypothesis to phonological decoding may depend on the scale of brain organization.
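The two hypotheses above make distinguishable statistical predictions: lateralization predicts a linear asymmetry-ability relationship, canalization a quadratic one. As an illustration only, a toy comparison of polynomial fits on synthetic data (all values invented, not the study's data) shows how the two predictions can be told apart:

```python
import numpy as np

rng = np.random.default_rng(7)
asym = rng.normal(0.0, 1.0, 300)  # synthetic structural asymmetry scores
# Canalization-style toy outcome: a quadratic, not linear, function of asymmetry.
score = -0.5 * asym**2 + rng.normal(0.0, 0.3, 300)

def r_squared(x, y, degree):
    """Variance explained by a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

r2_linear = r_squared(asym, score, 1)     # lateralization-style prediction
r2_quadratic = r_squared(asym, score, 2)  # canalization-style prediction
```

For data generated by a quadratic relationship, the degree-2 fit explains far more variance than the degree-1 fit; with linearly generated data the two fits would be nearly indistinguishable.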
Affiliation(s)
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology—Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology—Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Federico Iuricich
- Visual Computing Division, School of Computing, Clemson University, Clemson, South Carolina, United States of America
26
Hardcastle C, Hausman HK, Kraft JN, Albizu A, Evangelista ND, Boutzoukas EM, O'Shea A, Langer K, Van Etten E, Bharadwaj PK, Song H, Smith SG, Porges E, DeKosky ST, Hishaw GA, Wu SS, Marsiske M, Cohen R, Alexander GE, Woods AJ. Higher-order resting state network association with the useful field of view task in older adults. GeroScience 2022; 44:131-145. [PMID: 34431043 PMCID: PMC8810967 DOI: 10.1007/s11357-021-00441-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 08/12/2021] [Indexed: 10/20/2022] Open
Abstract
Speed-of-processing abilities decline with age yet are important in performing instrumental activities of daily living. The useful field of view, or Double Decision task, assesses speed-of-processing and divided attention. Performance on this task is related to attention, executive functioning, and visual processing abilities in older adults, and poorer performance predicts more motor vehicle accidents in the elderly. Cognitive training in this task reduces risk of dementia. Structural and functional neural correlates of this task suggest that higher-order resting state networks may be associated with performance on the Double Decision task, although this has never been explored. This study aimed to assess the association of within-network connectivity of the default mode network, dorsal attention network, frontoparietal control network, and cingulo-opercular network with Double Decision task performance, and subcomponents of this task in a sample of 267 healthy older adults. Multiple linear regressions showed that connectivity of the cingulo-opercular network is associated with visual speed-of-processing and divided attention subcomponents of the Double Decision task. Cingulo-opercular network and frontoparietal control network connectivity is associated with Double Decision task performance. Stronger connectivity is related to better performance in all cases. These findings confirm the unique role of the cingulo-opercular network in visual attention and sustained divided attention. Frontoparietal control network connectivity, in addition to cingulo-opercular network connectivity, is related to Double Decision task performance, a task implicated in reduced dementia risk. Future research should explore the role these higher-order networks play in reduced dementia risk after cognitive intervention using the Double Decision task.
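The multiple-linear-regression approach used here can be sketched on synthetic data. The variable names, connectivity scales, covariates, and effect sizes below are all invented for illustration; the study's actual models and data differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 267  # sample size matching the study; the data themselves are synthetic

# Hypothetical predictors: within-network connectivity and an age covariate.
con = rng.normal(0.4, 0.1, n)    # cingulo-opercular within-network connectivity
fpcn = rng.normal(0.4, 0.1, n)   # frontoparietal control network connectivity
age = rng.uniform(65, 90, n)

# Hypothetical outcome: Double Decision time (lower is better), improving
# with stronger connectivity and worsening with age, plus noise.
score = 1200 - 800 * con - 400 * fpcn + 4 * (age - 75) + rng.normal(0, 50, n)

X = np.column_stack([np.ones(n), con, fpcn, age])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, score, rcond=None)   # ordinary least squares
```

Negative coefficients on the connectivity terms correspond to the abstract's finding that stronger connectivity relates to better (faster) performance.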
Affiliation(s)
- Cheshire Hardcastle
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Hanna K Hausman
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Jessica N Kraft
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Neuroscience, University of Florida, Gainesville, FL, USA
- Alejandro Albizu
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Neuroscience, University of Florida, Gainesville, FL, USA
- Nicole D Evangelista
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Emanuel M Boutzoukas
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Andrew O'Shea
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Kailey Langer
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Emily Van Etten
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Pradyumna K Bharadwaj
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Hyun Song
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Samantha G Smith
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Eric Porges
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Steven T DeKosky
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Neurology and McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Georg A Hishaw
- Department of Neurology, University of Arizona, Tucson, AZ, USA
- Department of Psychiatry, University of Arizona, Tucson, AZ, USA
- Samuel S Wu
- Department of Biostatistics, University of Florida, Gainesville, FL, USA
- Michael Marsiske
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Ronald Cohen
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Gene E Alexander
- Evelyn F. McKnight Brain Institute, University of Arizona, Tucson, AZ, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Department of Psychiatry, University of Arizona, Tucson, AZ, USA
- Adam J Woods
- Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Department of Clinical and Health Psychology, University of Florida, Gainesville, FL, USA
- Department of Neuroscience, University of Florida, Gainesville, FL, USA
27
Sun PW, Hines A. Listening Effort Informed Quality of Experience Evaluation. Front Psychol 2022; 12:767840. [PMID: 35069342 PMCID: PMC8766726 DOI: 10.3389/fpsyg.2021.767840] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/31/2021] [Indexed: 11/15/2022] Open
Abstract
Perceived quality of experience for speech listening is influenced by cognitive processing and can affect a listener's comprehension, engagement and responsiveness. Quality of Experience (QoE) is a paradigm used within the media technology community to assess media quality by linking quantifiable media parameters to perceived quality. The established QoE framework provides a general definition of QoE, categories of possible quality influencing factors, and an identified QoE formation pathway. These assist researchers in implementing experiments and evaluating perceived quality for any application. The QoE formation pathways in the current framework do not attempt to capture cognitive effort effects, and the standard experimental assessments of QoE minimize the influence of cognitive processes. The impact of cognitive processes, and how they can be captured within the QoE framework, has not been systematically studied by the QoE research community. This article reviews research from the fields of audiology and cognitive science regarding how cognitive processes influence the quality of listening experience. The cognitive listening mechanism theories are compared with the QoE formation mechanism in terms of the quality contributing factors, experience formation pathways, and measures for experience. The review prompts a proposal to integrate mechanisms from audiology and cognitive science into the existing QoE framework in order to properly account for cognitive load in speech listening. The article concludes with a discussion of how an extended framework could facilitate measurement of QoE in broader and more realistic application scenarios where cognitive effort is a material consideration.
Affiliation(s)
- Pheobe Wenyi Sun
- QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
- Andrew Hines
- QxLab, School of Computer Science, University College Dublin, Dublin, Ireland
28
McClannahan KS, Mainardi A, Luor A, Chiu YF, Sommers MS, Peelle JE. Spoken Word Recognition in Listeners with Mild Dementia Symptoms. J Alzheimers Dis 2022; 90:749-759. [PMID: 36189586 PMCID: PMC9885492 DOI: 10.3233/jad-215606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
BACKGROUND Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS Participants were 53-86 years old, with (n = 16) or without (n = 32) dementia symptoms, as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.
Affiliation(s)
- Amelia Mainardi
- Department of Otolaryngology, Washington University in St. Louis
- Austin Luor
- Department of Otolaryngology, Washington University in St. Louis
- Yi-Fang Chiu
- Department of Speech, Language and Hearing Sciences, Saint Louis University
- Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis
29
Fitzhugh MC, Pa J. Longitudinal Changes in Resting-State Functional Connectivity and Gray Matter Volume Are Associated with Conversion to Hearing Impairment in Older Adults. J Alzheimers Dis 2022; 86:905-918. [PMID: 35147536 PMCID: PMC10796152 DOI: 10.3233/jad-215288] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Hearing loss was recently identified as a modifiable risk factor for dementia although the potential mechanisms explaining this relationship are unknown. OBJECTIVE The current study examined longitudinal change in resting-state fMRI functional connectivity and gray matter volume in individuals who developed a hearing impairment compared to those whose hearing remained normal. METHODS This study included 440 participants from the UK Biobank: 163 who had normal hearing at baseline and impaired hearing at follow-up (i.e., converters, mean age = 63.11±6.33, 53% female) and 277 who had normal hearing at baseline and maintained normal hearing at follow-up (i.e., non-converters, age = 63.31±5.50, 50% female). Functional connectivity was computed between a priori selected auditory seed regions (left and right Heschl's gyrus and cytoarchitectonic subregions Te1.0, Te1.1, and Te1.2) and select higher-order cognitive brain networks. Gray matter volume within these same regions was also obtained. RESULTS Converters had increased connectivity from left Heschl's gyrus to left anterior insula and from right Heschl's gyrus to right anterior insula, and decreased connectivity between right Heschl's gyrus and right hippocampus, compared to non-converters. Converters also had reduced gray matter volume in left hippocampus and left lateral visual cortex compared to non-converters. CONCLUSION These findings suggest that conversion to a hearing impairment is associated with altered brain functional connectivity and gray matter volume in the attention, memory, and visual processing regions that were examined in this study.
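Seed-based functional connectivity of the kind computed here is, at its core, a correlation between ROI time series. A minimal sketch on toy signals (the function and variable names are ours, not the study's pipeline, and the signals are synthetic):

```python
import numpy as np

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between a seed ROI time series and each target ROI.
    seed_ts: (T,) array; target_ts: (T, R) array -> (R,) correlations."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    targets = (target_ts - target_ts.mean(axis=0)) / target_ts.std(axis=0)
    return (seed[:, None] * targets).mean(axis=0)  # mean of z-score products = r

# Toy demo: two target regions, one coupled to the seed, one independent.
rng = np.random.default_rng(42)
t = np.arange(200)
seed = np.sin(t / 10) + rng.normal(0, 0.3, 200)       # e.g., a Heschl's-gyrus-like seed
coupled = seed + rng.normal(0, 0.3, 200)              # region sharing the seed signal
independent = rng.normal(0, 1.0, 200)                 # unrelated region
r = seed_connectivity(seed, np.column_stack([coupled, independent]))
```

In practice such correlations are computed per participant (often after Fisher z-transformation) and then compared between groups, as in the converter versus non-converter contrast above.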
Affiliation(s)
- Megan C. Fitzhugh
- Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Judy Pa
- Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Department of Neurology, Alzheimer’s Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
30
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538 PMCID: PMC9044122 DOI: 10.1007/s00429-021-02398-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 09/26/2021] [Indexed: 01/31/2023]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more-challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Carolyn M. McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
31
Berro DH, Lemée JM, Leiber LM, Emery E, Menei P, Ter Minassian A. Overt speech critically changes lateralization index and did not allow determination of hemispheric dominance for language: an fMRI study. BMC Neurosci 2021; 22:74. [PMID: 34852787 PMCID: PMC8638205 DOI: 10.1186/s12868-021-00671-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Accepted: 09/09/2021] [Indexed: 11/25/2022] Open
Abstract
Background Pre-surgical mapping of language with functional MRI aims principally to determine the dominant hemisphere. This mapping is currently performed using a covert linguistic task, in order to avoid motion artefacts that could bias the results. An overt task, however, is closer to natural speech, allows task performance to be monitored, and may be easier to perform for stressed patients and children. At the same time, by activating phonological areas in both hemispheres and areas involved in pitch and prosody control in the non-dominant hemisphere, an overt task is expected to alter the determination of the dominant hemisphere via the calculated lateralization index (LI). Objective Here, we analyzed changes in the LI and the interactions between cognitive networks during covert and overt speech tasks. Methods Thirty-three volunteers participated in this study; all but four were right-handed. They performed three functional sessions consisting of (1) covert and (2) overt generation of a short sentence semantically linked to an audibly presented word, from which we estimated the “Covert” and “Overt” contrasts, and (3) a resting-state session. The resting-state session was submitted to spatial independent component analysis to identify the language network at rest (LANG), the cingulo-opercular network (CO), and the ventral attention network (VAN). The LI was calculated using a bootstrapping method. Results The LI of the LANG was the most left-lateralized (0.66 ± 0.38). The LI shifted from moderate leftward lateralization for the Covert contrast (0.32 ± 0.38) to rightward lateralization for the Overt contrast (−0.13 ± 0.30); the two LIs differed significantly from each other. This rightward shift was due to the recruitment of right-hemispheric temporal areas together with the nodes of the CO. Conclusion Analyzing overt speech with fMRI improved physiological knowledge of the coordinated activity of the intrinsic connectivity networks. 
However, the rightward shift of the LI in this condition did not yield the essential information on hemispheric language dominance. An overt linguistic task therefore cannot be recommended for clinical determination of hemispheric dominance for language. Supplementary material is available online at 10.1186/s12868-021-00671-y.
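The LI behaviour described above can be illustrated with a simplified bootstrap. The study reports using a bootstrapping method, but the resampling scheme and activation values below are invented for illustration. The key relation is LI = (L − R) / (L + R): left-dominant activation gives a positive LI, while the bilateral recruitment typical of overt speech pulls the LI toward zero or below.

```python
import random

def lateralization_index(left, right):
    """LI = (L - R) / (L + R); +1 is fully left-lateralized, -1 fully right."""
    return (left - right) / (left + right)

def bootstrap_li(left_vals, right_vals, n_boot=1000, seed=0):
    """Resample suprathreshold activation values with replacement and return
    the mean LI across bootstrap samples (a simplified stand-in for the
    toolbox-based bootstrap procedure)."""
    rng = random.Random(seed)
    lis = []
    for _ in range(n_boot):
        l = sum(rng.choices(left_vals, k=len(left_vals)))
        r = sum(rng.choices(right_vals, k=len(right_vals)))
        lis.append(lateralization_index(l, r))
    return sum(lis) / n_boot

# Toy activation magnitudes: a covert-like (left-dominant) map versus an
# overt-like (bilateral) map.
covert = bootstrap_li([2.0, 2.5, 3.0, 2.2], [1.0, 1.1, 0.9, 1.2])
overt = bootstrap_li([2.0, 2.1, 2.2, 2.0], [2.3, 2.4, 2.2, 2.5])
```

This reproduces the qualitative pattern in the abstract: a clearly positive LI for the covert-like map and a slightly negative LI for the overt-like map.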
Affiliation(s)
- David Hassanein Berro
- Department of Neurosurgery, University Hospital of Caen Normandy, Avenue de la Côte de Nacre, 14000, Caen, France; Normandie Univ, UNICAEN, CEA, CNRS, ISTCT/CERVOxy group, GIP Cyceron, Caen, France; INSERM, CRCINA, Team 17, IRIS building, Angers, France
- Jean-Michel Lemée
- INSERM, CRCINA, Team 17, IRIS building, Angers, France; Department of Neurosurgery, University Hospital of Angers, Angers, France
- Evelyne Emery
- Department of Neurosurgery, University Hospital of Caen Normandy, Avenue de la Côte de Nacre, 14000, Caen, France; INSERM, UMR-S U1237, PhIND group, GIP Cyceron, Caen, France
- Philippe Menei
- INSERM, CRCINA, Team 17, IRIS building, Angers, France; Department of Neurosurgery, University Hospital of Angers, Angers, France
- Aram Ter Minassian
- Department of Anesthesiology, University Hospital of Angers, Angers, France; LARIS, ISISV team, University of Angers, Angers, France
32
Modi S, He X, Chaudhary K, Hinds W, Crow A, Beloor-Suresh A, Sperling MR, Tracy JI. Multiple-brain systems dynamically interact during tonic and phasic states to support language integrity in temporal lobe epilepsy. NEUROIMAGE-CLINICAL 2021; 32:102861. [PMID: 34688143 PMCID: PMC8536775 DOI: 10.1016/j.nicl.2021.102861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 09/10/2021] [Accepted: 10/13/2021] [Indexed: 11/18/2022]
Abstract
- Unique brain dynamics occur during a language task in left temporal lobe epilepsy (TLE).
- Multiple brain systems interact to implement compensated language status in TLE.
- Tonic/rest dynamics exert influence and may prime the level of phasic/task dynamics.
- Multi-network integrations are compensatory in patients with lower language skills.
An epileptogenic focus in the dominant temporal lobe can result in the reorganization of language systems in order to compensate for compromised functions. We studied the compensatory reorganization of language in the setting of left temporal lobe epilepsy (TLE), taking into account the interaction of language (L) with key non-language (NL) networks such as dorsal attention (DAN), fronto-parietal (FPN) and cingulo-opercular (COpN), with these systems providing cognitive resources helpful for successful language performance. We applied tools from dynamic network neuroscience to functional MRI data collected from 23 TLE patients and 23 matched healthy controls during the resting state (RS) and a sentence completion (SC) task to capture how the functional architecture of a language network dynamically changes and interacts with NL systems in these two contexts. We provided evidence that the brain areas in which core language functions reside dynamically interact with non-language functional networks to carry out linguistic functions. We demonstrated that abnormal integrations between the language and DAN existed in TLE, and were present both in tonic as well as phasic states. This integration was considered to reflect the entrainment of visual attention systems to the systems dedicated to lexical semantic processing. Our data made clear that the level of baseline integrations between the language subsystems and certain NL systems (e.g., DAN, FPN) had a crucial influence on the general level of task integrations between L/NL systems, with this a normative finding not unique to epilepsy. We also revealed that a broad set of task L/NL integrations in TLE are predictive of language competency, indicating that these integrations are compensatory for patients with lower overall language skills. 
We concluded that RS establishes the broad set of L/NL integrations available and primed for use during task performance, but that the actual use of those interactions in the setting of TLE depended on the level of language skill. We believe our analyses are the first to capture the potential compensatory role played by dynamic network reconfigurations between multiple brain systems during performance of a complex language task, in addition to testing for characteristics of both the phasic/task and tonic/resting states that are necessary to achieve language competency in the setting of temporal lobe pathology. Our analyses highlighted the intra- versus inter-system communications that form the basis of unique language processing in TLE, pointing to the dynamic reconfigurations that provided the broad multi-system support needed to maintain language skill and competency.
Affiliation(s)
- Shilpi Modi
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Xiaosong He
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Kapil Chaudhary
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Walter Hinds
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Andrew Crow
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Ashithkumar Beloor-Suresh
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Michael R Sperling
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
- Joseph I Tracy
- Department of Neurology, Comprehensive Epilepsy Centre, Thomas Jefferson University, Philadelphia, PA, USA
33
Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021; 240:118385. [PMID: 34256138 PMCID: PMC8503862 DOI: 10.1016/j.neuroimage.2021.118385] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 06/10/2021] [Accepted: 07/09/2021] [Indexed: 10/27/2022] Open
Abstract
In this study we used functional near-infrared spectroscopy (fNIRS) to investigate neural responses in normal-hearing adults as a function of speech recognition accuracy, intelligibility of the speech stimulus, and the manner in which speech is distorted. Participants listened to sentences and reported aloud what they heard. Speech quality was distorted artificially by vocoding (simulated cochlear implant speech) or naturally by adding background noise. Each type of distortion included high- and low-intelligibility conditions. Sentences in quiet were used as a baseline comparison. fNIRS data were analyzed using a newly developed image reconstruction approach. First, elevated cortical responses in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) were associated with speech recognition during the low-intelligibility conditions. Second, activation in the MTG was associated with recognition of vocoded speech with low intelligibility, whereas MFG activity was largely driven by recognition of speech in background noise, suggesting that the cortical response varies as a function of distortion type. Lastly, an accuracy effect in the MFG demonstrated significantly higher activation during correct perception relative to incorrect perception of speech. These results suggest that normal-hearing adults (i.e., untrained listeners of vocoded stimuli) do not exploit the same attentional mechanisms of the frontal cortex used to resolve naturally degraded speech and may instead rely on segmental and phonetic analyses in the temporal lobe to discriminate vocoded speech.
Affiliation(s)
- Jessica Defenderfer
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Samuel Forbes
- Psychology, University of East Anglia, Norwich, England
- Mark Hedrick
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Patrick Plyler
- Speech and Hearing Science, University of Tennessee Health Science Center, Knoxville, TN, United States
- Aaron T Buss
- Psychology, University of Tennessee, Knoxville, TN, United States
34
Decoding Object-Based Auditory Attention from Source-Reconstructed MEG Alpha Oscillations. J Neurosci 2021; 41:8603-8617. [PMID: 34429378 DOI: 10.1523/jneurosci.0583-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 08/08/2021] [Accepted: 08/11/2021] [Indexed: 11/21/2022] Open
Abstract
How do we attend to relevant auditory information in complex naturalistic scenes? Much research has focused on detecting which information is attended, without regard to the underlying top-down control mechanisms. Studies investigating attentional control generally manipulate and cue specific features in simple stimuli. However, in naturalistic scenes it is impossible to dissociate relevant from irrelevant information based on low-level features. Instead, the brain has to parse and select auditory objects of interest. The neural underpinnings of object-based auditory attention remain poorly understood. Here we recorded MEG while 15 healthy human subjects (9 female) prepared for the repetition of an auditory object presented in one of two overlapping naturalistic auditory streams. The stream containing the repetition was prospectively cued with 70% validity. Crucially, this task could not be solved by attending low-level features, but only by processing the objects fully. We trained a linear classifier on the cortical distribution of source-reconstructed oscillatory activity to distinguish which auditory stream was attended. We could successfully classify the attended stream from alpha (8-14 Hz) activity in anticipation of repetition onset. Importantly, attention could only be classified from trials in which subjects subsequently detected the repetition, but not from miss trials. Behavioral relevance was further supported by a correlation between classification accuracy and detection performance. Decodability was not sustained throughout stimulus presentation, but peaked shortly before repetition onset, suggesting that attention acted transiently according to temporal expectations.
We thus demonstrate that anticipatory alpha oscillations underlie top-down control of object-based auditory attention in complex naturalistic scenes.
SIGNIFICANCE STATEMENT: In everyday life, we often find ourselves bombarded with auditory information, from which we need to select what is relevant to our current goals. Previous research has highlighted how we attend to specific highly controlled aspects of the auditory input. Although invaluable, it is still unclear how this relates to attentional control in naturalistic auditory scenes. Here we used the high precision of magnetoencephalography in space and time to investigate the brain mechanisms underlying top-down control of object-based attention in ecologically valid sound scenes. We show that rhythmic activity in auditory association cortex at a frequency of ∼10 Hz (alpha waves) controls attention to currently relevant segments within the auditory scene and predicts whether these segments are subsequently detected.
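The trial-wise decoding logic described in this abstract can be illustrated with a toy sketch. The study's actual classifier and source-reconstructed MEG features are not specified here, so the sketch below substitutes a nearest-centroid linear classifier applied to simulated "alpha-power" features; the class means, feature dimensionality, and effect size are all hypothetical.

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid classifier: store the mean feature vector
    (e.g., alpha power per cortical region) for each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each trial to the class with the nearest centroid,
    a simple linear decision rule for two classes."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Simulated anticipatory alpha power for two attention conditions
# (attend stream A vs. attend stream B); the numbers are illustrative only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),   # condition 0 trials
               rng.normal(0.8, 1.0, (100, 8))])  # condition 1 trials
y = np.repeat([0, 1], 100)

train = np.r_[0:80, 100:180]    # per-condition train/test split
test = np.r_[80:100, 180:200]
model = fit_centroids(X[train], y[train])
accuracy = (predict(model, X[test]) == y[test]).mean()
```

The paper classified held-out trials from source-space oscillatory activity; this sketch only mirrors the train/test separation, not the MEG preprocessing or cross-validation scheme.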
35
Reduced Semantic Context and Signal-to-Noise Ratio Increase Listening Effort As Measured Using Functional Near-Infrared Spectroscopy. Ear Hear 2021; 43:836-848. [PMID: 34623112 DOI: 10.1097/aud.0000000000001137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES: Understanding speech-in-noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear if decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral PFC) increases as the SNR decreases and (2) listening effort increases as context decreases.
DESIGN: Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using functional near-infrared spectroscopy.
RESULTS: Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low.
CONCLUSIONS: These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
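The easy (+4 dB) and hard (-2 dB) conditions rest on the standard relation SNR = 10·log10(P_speech/P_noise). As a minimal sketch of how such conditions can be constructed (the signals below are random placeholders, not the study's stimuli):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) equals
    snr_db, then return the speech-plus-noise mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_speech / 10 ** (snr_db / 10)
    scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
    return speech + scaled_noise

rng = np.random.default_rng(1)
speech = rng.standard_normal(16000)      # placeholder "speech" signal
noise = rng.standard_normal(16000)       # placeholder masking noise
easy = mix_at_snr(speech, noise, 4.0)    # easy condition, +4 dB
hard = mix_at_snr(speech, noise, -2.0)   # hard condition, -2 dB
```

Note that lowering `snr_db` raises the noise power relative to the speech, which is exactly the acoustic manipulation the study pairs with the semantic-context manipulation.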
36
Bhandari P, Demberg V, Kray J. Semantic Predictability Facilitates Comprehension of Degraded Speech in a Graded Manner. Front Psychol 2021; 12:714485. [PMID: 34566795 PMCID: PMC8459870 DOI: 10.3389/fpsyg.2021.714485] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 08/06/2021] [Indexed: 01/02/2023] Open
Abstract
Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It is argued that when speech is degraded, listeners have narrowed expectations about the sentence endings; i.e., semantic prediction may be limited to only the most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) use and establish a sensitive metric for the measurement of language comprehension. For this, we created 360 German Subject-Verb-Object sentences that varied in semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and in level of spectral degradation (noise-vocoding with 1, 4, 6, or 8 channels). These sentences were presented auditorily to two groups: One group (n = 48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n = 50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4-channel noise-vocoding, response accuracy was higher for high-predictability sentences than for medium-predictability sentences, which in turn was higher than for low-predictability sentences. This suggests that, in contrast to the narrowed-expectations view, comprehension of moderately degraded speech, ranging from low- through medium- to high-predictability sentences, is facilitated in a graded manner; listeners probabilistically preactivate upcoming words from a wide range of semantic space, not limited to only the most probable sentence endings.
Additionally, in both channel contexts, we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment, and response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation of the levels of speech degradation, listeners adapt to speech quality at a long timescale; however, when there is a trial-by-trial variation of the high-level semantic feature (e.g., sentence predictability), listeners do not adapt to the low-level perceptual property (e.g., speech quality) at a short timescale.
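Noise-vocoding of the kind used in this study can be sketched as: split the signal into a small number of frequency bands, extract each band's amplitude envelope, and re-impose the envelopes on band-limited noise. The FFT-mask filtering, log-spaced band edges, and 10 ms moving-average smoothing below are illustrative simplifications, not the study's exact pipeline (published vocoders typically use proper filter banks).

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, fmin=100.0, fmax=4000.0, seed=0):
    """Crude n-channel noise vocoder: for each log-spaced band,
    take the band's rectified, smoothed envelope and use it to
    modulate band-limited white noise."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    k = int(fs * 0.01)                    # ~10 ms smoothing window
    kernel = np.ones(k) / k
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spec, 0), n)
        env = np.convolve(np.abs(band), kernel, mode="same")  # envelope
        carrier = np.fft.irfft(np.where(mask, noise_spec, 0), n)
        out += env * carrier
    return out

fs = 16000
t = np.arange(1600) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 1000 * t), fs, n_channels=4)
```

Fewer channels discard more spectral detail, which is why the 1-channel condition is far less intelligible than the 8-channel condition.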
Affiliation(s)
- Pratik Bhandari
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany
37
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size in 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Affiliation(s)
- Matthew B. Winn
- University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States
38
Jafari Z, Perani D, Kolb BE, Mohajerani MH. Bilingual experience and intrinsic functional connectivity in adults, aging, and Alzheimer's disease. Ann N Y Acad Sci 2021; 1505:8-22. [PMID: 34309857 DOI: 10.1111/nyas.14666] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/25/2021] [Accepted: 07/01/2021] [Indexed: 11/29/2022]
Abstract
The past decade marked the beginning of the use of resting-state functional connectivity (RSFC) imaging in bilingualism studies. This paper intends to review the latest evidence of changes in RSFC in language and cognitive control networks in bilinguals during adulthood, aging, and early Alzheimer's disease, which can add to our understanding of brain functional reshaping in the context of second language (L2) acquisition. Because of high variability in bilingual experience, recent studies mostly focus on the role of the main aspects of bilingual experience (age of acquisition (AoA), language proficiency, and language usage) on intrinsic functional connectivity (FC). Existing evidence indicates stronger FC in simultaneous than in sequential bilinguals in language and control networks, and modulation of the AoA effect by language proficiency and usage. Studies on older bilingual adults show stronger FC in language and frontoparietal networks and preserved FC in posterior brain regions, which can protect the brain against cognitive decline and neurodegenerative processes. Altered RSFC in language and control networks subsequent to L2 training programs is also associated with improved global cognition in older adults. This review ends with a brief discussion of potential confounding factors in bilingualism research and conclusions and suggestions for future research.
Affiliation(s)
- Zahra Jafari
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Daniela Perani
- Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Nuclear Medicine Unit, San Raffaele Hospital, Milan, Italy
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
39
Hartwigsen G, Bengio Y, Bzdok D. How does hemispheric specialization contribute to human-defining cognition? Neuron 2021; 109:2075-2090. [PMID: 34004139 PMCID: PMC8273110 DOI: 10.1016/j.neuron.2021.04.024] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2021] [Revised: 03/22/2021] [Accepted: 04/26/2021] [Indexed: 12/30/2022]
Abstract
Uniquely human cognitive faculties arise from flexible interplay between specific local neural modules, with hemispheric asymmetries in functional specialization. Here, we discuss how these computational design principles provide a scaffold that enables some of the most advanced cognitive operations, such as semantic understanding of world structure, logical reasoning, and communication via language. We draw parallels to dual-processing theories of cognition by placing a focus on Kahneman's System 1 and System 2. We propose integration of these ideas with the global workspace theory to explain dynamic relay of information products between both systems. Deepening the current understanding of how neurocognitive asymmetry makes humans special can ignite the next wave of neuroscience-inspired artificial intelligence.
Affiliation(s)
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Lise Meitner Research Group Cognition and Plasticity, Leipzig, Germany
- Yoshua Bengio
- Mila, Montreal, QC, Canada; University of Montreal, Montreal, QC, Canada
- Danilo Bzdok
- Mila, Montreal, QC, Canada; Montreal Neurological Institute, McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, and School of Computer Science, McGill University, Montreal, QC, Canada
40
White BE, Langdon C. The cortical organization of listening effort: New insight from functional near-infrared spectroscopy. Neuroimage 2021; 240:118324. [PMID: 34217787 DOI: 10.1016/j.neuroimage.2021.118324] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 06/17/2021] [Accepted: 06/28/2021] [Indexed: 10/21/2022] Open
Abstract
Everyday challenges impact our ability to hear and comprehend spoken language with ease, such as accented speech (source factors), spectral degradation (transmission factors), complex or unfamiliar language use (message factors), and predictability (context factors). Auditory degradation and linguistic complexity in the brain and behavior have been well investigated, and several computational models have emerged. The work here provides a novel test of the hypotheses that listening effort is partially reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe, and partially reliant on hierarchical linguistic computation in the brain's left hemisphere. We specifically hypothesize that these models are robust and can be applied in ecologically relevant and coarse-grain contexts that rigorously control for acoustic and linguistic listening challenges. Using functional near-infrared spectroscopy during an auditory plausibility judgment task, we show the hierarchical cortical organization for listening effort in the frontal and left temporal-parietal brain regions. In response to increasing levels of cognitive demand, we found (i) poorer comprehension, (ii) slower reaction times, (iii) increasing levels of perceived mental effort, (iv) increasing levels of brain activity in the prefrontal cortex, (v) hierarchical modulation of core language processing regions that reflect increasingly higher-order auditory-linguistic processing, and (vi) a correlation between participants' mental effort ratings and their performance on the task. Our results demonstrate that listening effort is partly reliant on higher cognitive auditory attention and working memory mechanisms in the frontal lobe and partly reliant on hierarchical linguistic computation in the brain's left hemisphere. 
Further, listening effort is driven by a voluntary, motivation-based attention system for which our results validate the use of a single-item post-task questionnaire for measuring perceived levels of mental effort and predicting listening performance. We anticipate our study to be a starting point for more sophisticated models of listening effort and even cognitive neuroplasticity in hearing aid and cochlear implant users.
Affiliation(s)
- Bradley E White
- Brain and Language Center for Neuroimaging, Gallaudet University, Washington, DC, USA
- Clifton Langdon
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
41
Gabel LA, Voss K, Johnson E, Lindström ER, Truong DT, Murray EM, Cariño K, Nielsen CM, Paniagua S, Gruen JR. Identifying Dyslexia: Link between Maze Learning and Dyslexia Susceptibility Gene, DCDC2, in Young Children. Dev Neurosci 2021; 43:116-133. [PMID: 34186533 DOI: 10.1159/000516667] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 04/20/2021] [Indexed: 12/29/2022] Open
Abstract
Dyslexia is a common learning disability that affects processing of written language despite adequate intelligence and educational background. If learning disabilities remain untreated, a child may experience long-term social and emotional problems, which influence future success in all aspects of their life. Dyslexia has a 60% heritability rate, and genetic studies have identified multiple dyslexia susceptibility genes (DSGs). DSGs, such as DCDC2, are consistently associated with the risk and severity of reading disability (RD). Altered neural connectivity within temporoparietal regions of the brain is associated with specific variants of DSGs in individuals with RD. Genetically altering DSG expression in mice results in visual and auditory processing deficits as well as neurophysiological and neuroanatomical disruptions. Previously, we demonstrated that learning deficits associated with RD can be translated across species using virtual environments. In this 2-year longitudinal study, we demonstrate that performance on a virtual Hebb-Williams maze in pre-readers is able to predict future reading impairment, and that genetic risk strengthens, but is not required for, this relationship. Because the task requires no oral reporting or use of letters, this easy-to-use tool may be particularly valuable in a remote working environment as well as in work with vulnerable populations such as English language learners.
Affiliation(s)
- Lisa A Gabel
- Department of Psychology, Lafayette College, Easton, Pennsylvania, USA; Program in Neuroscience, Lafayette College, Easton, Pennsylvania, USA
- Kelsey Voss
- Program in Neuroscience, Lafayette College, Easton, Pennsylvania, USA
- Evelyn Johnson
- Department of Special Education, Boise State University, Boise, Idaho, USA
- Esther R Lindström
- Department of Education and Human Services, Lehigh University, Bethlehem, Pennsylvania, USA
- Dongnhu T Truong
- Department of Pediatrics, Yale School of Medicine, New Haven, Connecticut, USA
- Erin M Murray
- Program in Neuroscience, Lafayette College, Easton, Pennsylvania, USA
- Karla Cariño
- Program in Neuroscience, Lafayette College, Easton, Pennsylvania, USA
- Christiana M Nielsen
- Department of Education and Human Services, Lehigh University, Bethlehem, Pennsylvania, USA
- Steven Paniagua
- Department of Genetics, Yale School of Medicine, New Haven, Connecticut, USA
- Jeffrey R Gruen
- Department of Pediatrics, Yale School of Medicine, New Haven, Connecticut, USA; Department of Genetics, Yale School of Medicine, New Haven, Connecticut, USA
42
Jafari Z, Kolb BE, Mohajerani MH. Age-related hearing loss and cognitive decline: MRI and cellular evidence. Ann N Y Acad Sci 2021; 1500:17-33. [PMID: 34114212 DOI: 10.1111/nyas.14617] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 04/30/2021] [Accepted: 05/07/2021] [Indexed: 12/16/2022]
Abstract
Extensive evidence supports the association between age-related hearing loss (ARHL) and cognitive decline. It is, however, unknown whether a causal relationship exists between these two, or whether they both result from shared mechanisms. This paper intends to study this relationship through a comprehensive review of MRI findings as well as evidence of cellular alterations. Our review of structural MRI studies demonstrates that ARHL is independently linked to accelerated atrophy of total and regional brain volumes and reduced white matter integrity. Resting-state fMRI studies on ARHL also show changes in spontaneous neural activity and brain functional connectivity independent of age, and task-based fMRI studies show alterations in brain areas supporting auditory, language, cognitive, and affective processing. Although MRI findings support a causal relationship between ARHL and cognitive decline, the contribution of potential shared mechanisms should also be considered. In this regard, the review of cellular evidence indicates their role as possible common mechanisms underlying both age-related changes in hearing and cognition. Considering existing evidence, no single hypothesis can explain the link between ARHL and cognitive decline, and the contribution of both causal (i.e., the sensory hypothesis) and shared (i.e., the common cause hypothesis) mechanisms is expected.
Affiliation(s)
- Zahra Jafari
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
43
Hertrich I, Dietrich S, Blum C, Ackermann H. The Role of the Dorsolateral Prefrontal Cortex for Speech and Language Processing. Front Hum Neurosci 2021; 15:645209. [PMID: 34079444 PMCID: PMC8165195 DOI: 10.3389/fnhum.2021.645209] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 04/06/2021] [Indexed: 11/24/2022] Open
Abstract
This review article summarizes various functions of the dorsolateral prefrontal cortex (DLPFC) that are related to language processing. To this end, its connectivity with the left-dominant perisylvian language network was considered, as well as its interaction with other functional networks that, directly or indirectly, contribute to language processing. Language-related functions of the DLPFC comprise various aspects of pragmatic processing such as discourse management, integration of prosody, interpretation of nonliteral meanings, inference making, ambiguity resolution, and error repair. Neurophysiologically, the DLPFC seems to be a key region for implementing functional connectivity between the language network and other functional networks, including cortico-cortical as well as subcortical circuits. Considering clinical aspects, damage to the DLPFC causes psychiatric communication deficits rather than typical aphasic language syndromes. Although the number of well-controlled studies on DLPFC language functions is still limited, the DLPFC might be an important target region for the treatment of pragmatic language disorders.
Affiliation(s)
- Ingo Hertrich
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Susanne Dietrich
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Tübingen, Germany
- Corinna Blum
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Hermann Ackermann
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
44
Guediche S, de Bruin A, Caballero-Gaudes C, Baart M, Samuel AG. Second-language word recognition in noise: Interdependent neuromodulatory effects of semantic context and crosslinguistic interactions driven by word form similarity. Neuroimage 2021; 237:118168. [PMID: 34000398 DOI: 10.1016/j.neuroimage.2021.118168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 05/05/2021] [Accepted: 05/12/2021] [Indexed: 11/17/2022] Open
Abstract
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. 
We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.
Affiliation(s)
- Sara Guediche
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain.
- Martijn Baart
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, the Netherlands
- Arthur G Samuel
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Stony Brook University, NY 11794-2500, United States; Ikerbasque Foundation, Spain
45
Graessner A, Zaccarella E, Hartwigsen G. Differential contributions of left-hemispheric language regions to basic semantic composition. Brain Struct Funct 2021; 226:501-518. [PMID: 33515279 PMCID: PMC7910266 DOI: 10.1007/s00429-020-02196-2]
Abstract
Semantic composition, the ability to combine single words to form complex meanings, is a core feature of human language. Despite growing interest in the basis of semantic composition, the neural correlates and the interaction of regions within this network remain a matter of debate. We designed a well-controlled two-word fMRI paradigm in which phrases only differed along the semantic dimension while keeping syntactic information alike. Healthy participants listened to meaningful ("fresh apple"), anomalous ("awake apple") and pseudoword phrases ("awake gufel") while performing an implicit and an explicit semantic task. We identified neural signatures for distinct processes during basic semantic composition. When lexical information is kept constant across conditions and the evaluation of phrasal plausibility is examined (meaningful vs. anomalous phrases), a small set of mostly left-hemispheric semantic regions, including the anterior part of the left angular gyrus, is found active. Conversely, when the load of lexical information, independently of phrasal plausibility, is varied (meaningful or anomalous vs. pseudoword phrases), conceptual combination involves a widespread left-hemispheric network comprising executive semantic control regions and general conceptual representation regions. Within this network, the functional coupling between the left anterior inferior frontal gyrus, the bilateral pre-supplementary motor area and the posterior angular gyrus specifically increases for meaningful phrases relative to pseudoword phrases. Stronger effects in the explicit task further suggest task-dependent neural recruitment. Overall, we provide a separation between distinct nodes of the semantic network, whose functional contributions depend on the type of compositional process under analysis.
Affiliation(s)
- Astrid Graessner
- Lise-Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany.
- Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany
- Gesa Hartwigsen
- Lise-Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103, Leipzig, Germany
46
Dreyer FR, Doppelbauer L, Büscher V, Arndt V, Stahl B, Lucchese G, Hauk O, Mohr B, Pulvermüller F. Increased recruitment of domain-general neural networks in language processing following intensive language-action therapy: fMRI evidence from people with chronic aphasia. Am J Speech Lang Pathol 2021; 30:455-465. [PMID: 32830988 PMCID: PMC7613191 DOI: 10.1044/2020_ajslp-19-00150]
Abstract
Purpose: This study aimed to provide novel insights into the neural correlates of language improvement following intensive language-action therapy (ILAT; also known as constraint-induced aphasia therapy). Method: Sixteen people with chronic aphasia underwent clinical aphasia assessment (Aachen Aphasia Test [AAT]), as well as functional magnetic resonance imaging (fMRI), both administered before (T1) and after ILAT (T2). The fMRI task included passive reading of single written words, with hashmark strings as visual baseline. Results: Behavioral results indicated significant improvements of AAT scores across therapy, and fMRI results showed T2-T1 blood oxygenation-level-dependent (BOLD) signal change in the left precuneus to be modulated by the degree of AAT score increase. Subsequent region-of-interest analysis of this precuneus cluster confirmed a positive correlation of T2-T1 BOLD signal change and improvement on the clinical aphasia test. Similarly, the entire default mode network revealed a positive correlation between T2-T1 BOLD signal change and clinical language improvement. Conclusion: These results are consistent with a more efficient recruitment of domain-general neural networks in language processing, including those involved in attentional control, following aphasia therapy with ILAT. Supplemental Material: https://doi.org/10.23641/asha.12765755
Affiliation(s)
- Felix R. Dreyer
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany
- Cluster of Excellence Matters of Activity, Image Space Material, Humboldt Universität zu Berlin, Germany
- Lea Doppelbauer
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany
- Einstein Center for Neurosciences Berlin, Germany
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany
- Verena Büscher
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany
- Verena Arndt
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany
| | - Benjamin Stahl
- Department of Neurology, University Medicine Greifswald, Germany
- Department of Neurology, Charité Universitätsmedizin Berlin, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Psychologische Hochschule Berlin, Germany
- Olaf Hauk
- Medical Research Council Cognition and Brain Sciences Unit, Cambridge, United Kingdom
- Bettina Mohr
- ZeNIS-Centre for Neuropsychology and Intensive Language Therapy, Berlin, Germany
- Department of Psychiatry, Charité Universitätsmedizin Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Germany
- Cluster of Excellence Matters of Activity, Image Space Material, Humboldt Universität zu Berlin, Germany
- Einstein Center for Neurosciences Berlin, Germany
- Berlin School of Mind and Brain, Humboldt University Berlin, Germany
47
Quillen IA, Yen M, Wilson SM. Distinct neural correlates of linguistic demand and non-linguistic demand. Neurobiol Lang 2021; 2:202-225. [PMID: 34585141 PMCID: PMC8475781 DOI: 10.1162/nol_a_00031]
Abstract
In this study, we investigated how the brain responds to task difficulty in linguistic and non-linguistic contexts. This is important for the interpretation of functional imaging studies of neuroplasticity in post-stroke aphasia, because of the inherent difficulty of matching or controlling task difficulty in studies with neurological populations. Twenty neurologically normal individuals were scanned with fMRI as they performed a linguistic task and a non-linguistic task, each of which had two levels of difficulty. Critically, the tasks were matched across domains (linguistic, non-linguistic) for accuracy and reaction time, such that the differences between the easy and difficult conditions were equivalent across domains. We found that non-linguistic demand modulated the same set of multiple demand (MD) regions that have been identified in many prior studies. In contrast, linguistic demand modulated MD regions to a much lesser extent, especially nodes belonging to the dorsal attention network. Linguistic demand modulated a subset of language regions, with the left inferior frontal gyrus most strongly modulated. The right hemisphere region homotopic to Broca's area was also modulated by linguistic but not non-linguistic demand. When linguistic demand was mapped relative to non-linguistic demand, we also observed domain by difficulty interactions in temporal language regions as well as a widespread bilateral semantic network. In sum, linguistic and non-linguistic demand have strikingly different neural correlates. These findings can be used to better interpret studies of patients recovering from aphasia. Some reported activations in these studies may reflect task performance differences, while others can be more confidently attributed to neuroplasticity.
Affiliation(s)
- Ian A Quillen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Melodie Yen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
48
Holmes E, Zeidman P, Friston KJ, Griffiths TD. Difficulties with speech-in-noise perception related to fundamental grouping processes in auditory cortex. Cereb Cortex 2020; 31:1582-1596. [PMID: 33136138 PMCID: PMC7869094 DOI: 10.1093/cercor/bhaa311]
Abstract
In our everyday lives, we are often required to follow a conversation when background noise is present (“speech-in-noise” [SPIN] perception). SPIN perception varies widely—and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance)—consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks—which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Peter Zeidman
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
49
Fitzhugh MC, Schaefer SY, Baxter LC, Rogalsky C. Cognitive and neural predictors of speech comprehension in noisy backgrounds in older adults. Lang Cogn Neurosci 2020; 36:269-287. [PMID: 34250179 PMCID: PMC8261331 DOI: 10.1080/23273798.2020.1828946]
Abstract
Older adults often experience difficulties comprehending speech in noisy backgrounds, which hearing loss does not fully explain. It remains unknown how cognitive abilities, brain networks, and age-related hearing loss may uniquely contribute to speech-in-noise comprehension at the sentence level. In 31 older adults, using cognitive measures and resting-state fMRI, we investigated the cognitive and neural predictors of speech comprehension under energetic (broadband noise) and informational (multi-speaker) masking. Better hearing thresholds and greater working memory abilities were associated with better speech comprehension under energetic masking. Conversely, faster processing speed and stronger functional connectivity between frontoparietal and language networks were associated with better speech comprehension under informational masking. Our findings highlight the importance of the frontoparietal network in older adults' ability to comprehend speech in multi-speaker backgrounds, and show that hearing loss and working memory contribute to speech comprehension under energetic, but not informational, masking.
Affiliation(s)
- Megan C. Fitzhugh
- Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA
- College of Health Solutions, Arizona State University, Tempe, AZ
- Sydney Y. Schaefer
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ
50
Rysop AU, Schmitt LM, Obleser J, Hartwigsen G. Neural modelling of the semantic predictability gain under challenging listening conditions. Hum Brain Mapp 2020; 42:110-127. [PMID: 32959939 PMCID: PMC7721236 DOI: 10.1002/hbm.25208]
Abstract
When speech intelligibility is reduced, listeners exploit constraints posed by semantic context to facilitate comprehension. The left angular gyrus (AG) has been argued to drive this semantic predictability gain. Taking a network perspective, we ask how the connectivity within language-specific and domain-general networks flexibly adapts to the predictability and intelligibility of speech. During continuous functional magnetic resonance imaging (fMRI), participants repeated sentences, which varied in semantic predictability of the final word and in acoustic intelligibility. At the neural level, highly predictable sentences led to stronger activation of left-hemispheric semantic regions including subregions of the AG (PGa, PGp) and posterior middle temporal gyrus when speech became more intelligible. The behavioural predictability gain of single participants mapped onto the same regions but was complemented by increased activity in frontal and medial regions. Effective connectivity from PGa to PGp increased for more intelligible sentences. In contrast, inhibitory influence from pre-supplementary motor area to left insula was strongest when predictability and intelligibility of sentences were either lowest or highest. This interactive effect was negatively correlated with the behavioural predictability gain. Together, these results suggest that successful comprehension in noisy listening conditions relies on an interplay of semantic regions and concurrent inhibition of cognitive control regions when semantic cues are available.
Affiliation(s)
- Anna Uta Rysop
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lea-Maria Schmitt
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany