1. Sobczak GG, Zhou X, Moore LE, Bolt DM, Litovsky RY. Cortical mechanisms of across-ear speech integration investigated using functional near-infrared spectroscopy (fNIRS). PLoS One 2024; 19:e0307158. PMID: 39292701; PMCID: PMC11410267; DOI: 10.1371/journal.pone.0307158.
Abstract
This study aimed to investigate integration of alternating speech, a stimulus that classically produces a V-shaped speech intelligibility function with a minimum at 2-6 Hz in typical-hearing (TH) listeners. We further studied how degraded speech impacts intelligibility across alternating rates (2, 4, 8, and 32 Hz) using vocoded speech, either in the right ear or bilaterally, to simulate single-sided deafness with a cochlear implant (SSD-CI) and bilateral CIs (BiCI), respectively. To assess potential cortical signatures of across-ear integration, we recorded activity in the bilateral auditory cortices (AC) and dorsolateral prefrontal cortices (DLPFC) during the task using functional near-infrared spectroscopy (fNIRS). For speech intelligibility, the V-shaped function was reproduced only in the BiCI condition; the TH (with ceiling scores) and SSD-CI conditions had significantly higher scores across all alternating rates compared to the BiCI condition. For fNIRS, the AC and DLPFC exhibited significantly different activity across alternating rates in the TH condition, with altered activity patterns in both regions in the SSD-CI and BiCI conditions. Our results suggest that degraded speech input in one or both ears impacts across-ear integration, and that different listening strategies were employed for speech integration, manifested as differences in cortical activity across conditions.
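The CI simulations above rely on vocoded speech. For readers unfamiliar with the technique, the sketch below shows a generic noise vocoder: the signal is split into frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited noise. All parameters here (channel count, band edges, filter order) are illustrative assumptions, not the study's actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, n_channels=8, fmin=100.0, fmax=6000.0):
    """Generic noise vocoder: keep each band's envelope, replace its
    fine structure with band-limited noise (a common CI simulation)."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, x)                        # analysis band
        env = np.abs(hilbert(band))                     # Hilbert envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))
        out += env * carrier                            # envelope-modulated noise
    return out

# Vocode one second of a 440 Hz tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 440 * t), fs)
```

Lowering the channel count degrades spectral detail more severely; vocoder studies of this kind typically vary the number of channels to titrate intelligibility.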
Affiliation(s)
- Gabriel G Sobczak: Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Xin Zhou: Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Liberty E Moore: Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Daniel M Bolt: Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI, United States of America
- Ruth Y Litovsky: Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America; Department of Communication Sciences and Disorders, University of Wisconsin-Madison; Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison

2. Brill-Weil SG, Kramer PF, Yanez A, Clever FH, Zhang R, Khaliq ZM. Presynaptic GABAA receptors control integration of nicotinic input onto dopaminergic axons in the striatum. bioRxiv [preprint] 2024:2024.06.25.600616. PMID: 39372741; PMCID: PMC11451734; DOI: 10.1101/2024.06.25.600616.
Abstract
Axons of dopaminergic neurons express gamma-aminobutyric acid type-A receptors (GABAARs) and nicotinic acetylcholine receptors (nAChRs), which are both independently positioned to shape striatal dopamine release. Using electrophysiology and calcium imaging, we investigated how interactions between GABAARs and nAChRs influence dopaminergic axon excitability. Direct axonal recordings showed that benzodiazepine application suppresses subthreshold axonal input from cholinergic interneurons (CINs). In imaging experiments, we used the first temporal derivative of presynaptic calcium signals to distinguish between direct- and nAChR-evoked activity in dopaminergic axons. We found that GABAAR antagonism with gabazine selectively enhanced nAChR-evoked axonal signals. Acetylcholine release was unchanged in gabazine, suggesting that GABAARs located on dopaminergic axons, but not on CINs, mediated this enhancement. Unexpectedly, we found that a widely used GABAAR antagonist, picrotoxin, inhibits axonal nAChRs and should be used cautiously for striatal circuit analysis. Overall, we demonstrate that GABAARs on dopaminergic axons regulate integration of nicotinic input to shape presynaptic excitability.
3. Mori F, Sugino M, Kabashima K, Nara T, Jimbo Y, Kotani K. Limiting parameter range for cortical-spherical mapping improves activated domain estimation for attention modulated auditory response. J Neurosci Methods 2024; 402:110032. PMID: 38043853; DOI: 10.1016/j.jneumeth.2023.110032.
Abstract
BACKGROUND: Attention is one of the factors involved in selecting input information for the brain. We applied a method for estimating domains with clear boundaries using magnetoencephalography (the domain estimation method) to auditory-evoked responses (N100m) to evaluate the effects of attention on a millisecond timescale. However, because the surface around the auditory cortex is folded in a complicated manner, it was unknown whether activity in the auditory cortex could be estimated.
NEW METHOD: The parameter range used to express current sources was set to include the auditory cortex. The search region was expressed as a direct product of the parameter ranges used in the adaptive diagonal curves.
RESULTS: Without limiting the range, activity was estimated in regions other than the auditory cortex in all cases. With the limitation, activity was estimated in the primary or higher auditory cortex. Further analysis under the limited range showed that the domains activated during attention included the regions activated during no attention for participants whose N100m amplitudes were higher during attention.
COMPARISON WITH EXISTING METHOD: We proposed a method for effectively limiting the search region to evaluate the extent of the activated domain in regions with complex folded structures.
CONCLUSION: To evaluate the extent of activated domains in regions with complex folded structures, it is necessary to limit the parameter search range. The area of the activated domains in the auditory cortex may increase with attention on the millisecond timescale.
Affiliation(s)
- Fumina Mori: School of Engineering, The University of Tokyo, Tokyo, Japan
- Masato Sugino: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kenta Kabashima: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Takaaki Nara: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Yasuhiko Jimbo: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kiyoshi Kotani: Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan

4. Wu L, Mei S, Yu S, Han S, Zhang YQ. Shank3 mutations enhance early neural responses to deviant tones in dogs. Cereb Cortex 2023; 33:10546-10557. PMID: 37585733; DOI: 10.1093/cercor/bhad302.
Abstract
Both enhanced discrimination of low-level features of auditory stimuli and mutations of SHANK3 (a gene that encodes a synaptic scaffolding protein) have been identified in autism spectrum disorder patients. However, experimental evidence regarding whether SHANK3 mutations lead to enhanced neural processing of low-level features of auditory stimuli is lacking. The present study investigated this possibility by examining the effects of Shank3 mutations on early neural processing of pitch (tone frequency) in dogs. We recorded electrocorticograms from wild-type and Shank3 mutant dogs using an oddball paradigm in which deviant tones of different frequencies or probabilities were presented along with other tones in a repetitive stream (standards). We found that, relative to wild-type dogs, Shank3 mutant dogs exhibited larger amplitudes of early neural responses to deviant tones and greater sensitivity to variations in deviant frequency within 100 ms after tone onset. In addition, the enhanced early neural responses to deviant tones in Shank3 mutant dogs were observed independently of the probability of the deviant tones. Our findings highlight an essential functional role of Shank3 in modulation of early neural detection of novel sounds and offer new insights into the genetic basis of atypical auditory information processing in autism patients.
Affiliation(s)
- Liang Wu: State Key Laboratory for Molecular and Developmental Biology, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuting Mei: School of Psychological and Cognitive Sciences, PKU-IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Shan Yu: Brainnetome Center and State Key Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Shihui Han: School of Psychological and Cognitive Sciences, PKU-IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Yong Q Zhang: State Key Laboratory for Molecular and Developmental Biology, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing 100101, China; College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China

5. Liu S, Liu X, Chen S, Su F, Zhang B, Ke Y, Li J, Ming D. Neurophysiological markers of depression detection and severity prediction in first-episode major depressive disorder. J Affect Disord 2023; 331:8-16. PMID: 36940824; DOI: 10.1016/j.jad.2023.03.038.
Abstract
OBJECTIVE: Deviant γ auditory steady-state responses (γ-ASSRs) have been documented in some psychiatric disorders. Nevertheless, the role of the γ-ASSR in drug-naïve first-episode major depressive disorder (FEMD) patients remains equivocal. This study aimed to examine whether γ-ASSRs are impaired in FEMD patients and whether they predict depression severity.
METHODS: Cortical reactivity was assessed in a cohort of 28 FEMD patients relative to 30 healthy control (HC) subjects during an ASSR paradigm randomly presented at 40 and 60 Hz. Event-related spectral perturbation and inter-trial phase coherence (ITC) were calculated to quantify dynamic changes in the γ-ASSR. Receiver operating characteristic curve analysis combined with binary logistic regression was then employed to identify the ASSR variables that maximally differentiated the groups.
RESULTS: FEMD patients exhibited significantly lower 40 Hz ASSR-ITC in the right hemisphere than HC subjects (p = 0.007), along with attenuated θ-ITC reflecting underlying impairments in θ responses during 60 Hz clicks (p < 0.05). Moreover, 40 Hz ASSR-ITC and θ-ITC in the right hemisphere can be used as a combined marker that detects FEMD patients with 84.0% sensitivity and 81.5% specificity (area under the curve 0.868, 95% CI: 0.768-0.968). Pearson's correlations between depression severity and the ASSR variables were further computed: the symptom severity of FEMD patients was negatively correlated with 60 Hz ASSR-ITC in the midline and right hemisphere, possibly indicating that depression severity mediated high-γ neural synchrony.
CONCLUSIONS: Our findings provide critical insight into the pathological mechanism of FEMD, suggesting first that 40 Hz ASSR-ITC and θ-ITC in the right hemisphere constitute potential neurophysiological markers for early depression detection, and second that high-γ entrainment deficits may contribute to symptom severity in FEMD patients.
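Inter-trial phase coherence, the key measure in this study, is the magnitude of the average unit phasor of trial phases at a given frequency (0 for random phases, 1 for perfect phase locking). Below is a minimal single-frequency sketch; the study used full time-frequency decompositions, and the simulated trial counts and noise levels here are illustrative assumptions.

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """ITC at one frequency: project each trial onto a complex
    sinusoid, keep only the phase, and average the unit phasors
    across trials. Return the magnitude of that average."""
    t = np.arange(trials.shape[1]) / fs
    phasor = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ phasor              # one complex coefficient per trial
    unit = coeffs / np.abs(coeffs)        # discard amplitude, keep phase
    return np.abs(unit.mean())

# Phase-locked 40 Hz trials give ITC near 1; phase-jittered trials near 0.
rng = np.random.default_rng(1)
fs, n_trials = 1000, 50
t = np.arange(int(fs * 0.5)) / fs        # 0.5 s epochs
locked = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal((n_trials, t.size))
jittered = np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi, (n_trials, 1)))
```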
Affiliation(s)
- Shuang Liu: Tianjin University, Academy of Medical Engineering and Translational Medicine, Tianjin, China
- Xiaoya Liu: Tianjin University, Academy of Medical Engineering and Translational Medicine, Tianjin, China
- Sitong Chen: Tianjin University, School of Precision Instruments and Optoelectronics Engineering, Tianjin, China
- Fangyue Su: Tianjin University, School of Precision Instruments and Optoelectronics Engineering, Tianjin, China
- Bo Zhang: Tianjin University, School of Precision Instruments and Optoelectronics Engineering, Tianjin, China
- Yufeng Ke: Tianjin University, Academy of Medical Engineering and Translational Medicine, Tianjin, China
- Jie Li: Tianjin Anding Hospital, Tianjin, China
- Dong Ming: Tianjin University, Academy of Medical Engineering and Translational Medicine, Tianjin, China; Tianjin University, School of Precision Instruments and Optoelectronics Engineering, Tianjin, China

6. Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. PMID: 36786655; DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies assessed mostly spatial characteristics; temporal aspects have so far received little consideration. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions of interest within AC, namely medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the locations of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right compared with the left PT and ~15 ms earlier in the right compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction of serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt: Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland; Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner: Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth: Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich: Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany; Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow: Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland

7. Mischler G, Keshishian M, Bickel S, Mehta AD, Mesgarani N. Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. Neuroimage 2023; 266:119819. PMID: 36529203; PMCID: PMC10510744; DOI: 10.1016/j.neuroimage.2022.119819.
Abstract
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
Affiliation(s)
- Gavin Mischler: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Menoua Keshishian: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Stephan Bickel: Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Ashesh D Mehta: Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Nima Mesgarani: Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States

8. Axelrod V, Rozier C, Lehongre K, Adam C, Lambrecq V, Navarro V, Naccache L. Neural modulations in the auditory cortex during internal and external attention tasks: A single-patient intracranial recording study. Cortex 2022; 157:211-230. PMID: 36335821; DOI: 10.1016/j.cortex.2022.09.011.
Abstract
Brain sensory processing is not passive but is modulated by our internal state. Different research methods, such as non-invasive imaging and intracranial recording of the local field potential (LFP), have been used to study the extent to which sensory processing, and the auditory cortex in particular, is modulated by selective attention. However, selective attention in humans has not been tested at the level of single or multi-units. In addition, most previous research on selective attention has explored externally oriented attention, but attention can also be directed inward (i.e., internal attention), as in spontaneous self-generated thoughts and mind-wandering. In the present study we had a rare opportunity to record multi-unit activity (MUA) in the auditory cortex of a patient. To complement this, we also analyzed the LFP signal of the macro-contact in the auditory cortex. Our experiment consisted of two conditions with periodic beeping sounds: participants were asked either to count the beeps (an "external attention" condition) or to recall the events of the previous day (an "internal attention" condition). We found that four of the seven recorded units in the auditory cortex showed increased firing rates in the "external attention" condition compared to the "internal attention" condition. The onset of this attentional modulation varied across multi-units between 30-50 msec and 130-150 msec from stimulus onset, a result compatible with an early-selection view. The LFP evoked potential and induced high-gamma activity both showed attentional modulation starting at about 70-80 msec. As a control, we recorded MUA in the amygdala and hippocampus of two additional patients during the same experiment; no major attentional modulation was found in these control regions. Overall, we believe that our results provide new empirical information and support for existing theoretical views on selective attention and spontaneous self-generated cognition.
Affiliation(s)
- Vadim Axelrod: The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Camille Rozier: Sorbonne Université, Institut du Cerveau - Paris Brain Institute, ICM, INSERM U1127, CNRS UMR 7225, Paris, France
- Katia Lehongre: Sorbonne Université, Institut du Cerveau - Paris Brain Institute, ICM, INSERM U1127, CNRS UMR 7225, Paris, France; Centre de NeuroImagerie de Recherche-CENIR, Paris Brain Institute, UMRS 1127, CNRS UMR 7225, Pitié-Salpêtrière Hospital, Paris, France
- Claude Adam: AP-HP, GH Pitié-Salpêtrière-Charles Foix, Epilepsy Unit, Neurology Department, Paris, France
- Virginie Lambrecq: Sorbonne Université, Institut du Cerveau - Paris Brain Institute, ICM, INSERM U1127, CNRS UMR 7225, Paris, France; AP-HP, Groupe hospitalier Pitié-Salpêtrière, Department of Neurophysiology, Paris, France; Sorbonne Université, UMR S1127, Paris, France
- Vincent Navarro: Sorbonne Université, Institut du Cerveau - Paris Brain Institute, ICM, INSERM U1127, CNRS UMR 7225, Paris, France; AP-HP, GH Pitié-Salpêtrière-Charles Foix, Epilepsy Unit, Neurology Department, Paris, France; Sorbonne Université, UMR S1127, Paris, France
- Lionel Naccache: Sorbonne Université, Institut du Cerveau - Paris Brain Institute, ICM, INSERM U1127, CNRS UMR 7225, Paris, France; AP-HP, Groupe hospitalier Pitié-Salpêtrière, Department of Neurophysiology, Paris, France

9. Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022; 12:18789. PMID: 36335137; PMCID: PMC9637225; DOI: 10.1038/s41598-022-22041-2.
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) attended speech, or performed visual or speech motor control tasks in which they did not attend to speech and responses were not related to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction is present for motor vs. perceptual processing of speech already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than any specific anatomical landmark, as suggested by some previous studies.
10. Curtis MT, Ren X, Coffman BA, Salisbury DF. Attentional M100 gain modulation localizes to auditory sensory cortex and is deficient in first-episode psychosis. Hum Brain Mapp 2022; 44:218-228. PMID: 36073535; PMCID: PMC9783396; DOI: 10.1002/hbm.26067.
Abstract
Selective attention is impaired in first-episode psychosis (FEP). Selective attention effects can be detected during auditory tasks as increased sensory activity. We previously reported that scalp-measured electroencephalography N100 enhancement is reduced in FEP. Here, we localized magnetoencephalography (MEG) M100 source activity within the auditory cortex, making novel use of the Human Connectome Project multimodal parcellation (HCP-MMP) to identify the precise auditory cortical areas involved in attention modulation and its impairment in FEP. MEG was recorded from 27 FEP and 31 matched healthy controls (HC) while individuals either ignored frequent standard and rare oddball tones during a silent movie, or attended to the tones by pressing a button to oddballs. Because the M100 arises mainly in the auditory cortices, MEG activity during the M100 interval was projected to the auditory sensory cortices defined by the HCP-MMP (A1, lateral belt, and parabelt parcels). FEP had less auditory sensory cortex M100 activity in both conditions. In addition, there was a significant interaction between group and attention: HC enhanced source activity with attention, but FEP did not. These results demonstrate deficits in both sensory processing and attentional modulation of the M100 in FEP. Novel use of the HCP-MMP revealed the precise cortical areas underlying attention modulation of auditory sensory activity in healthy individuals and its impairment in FEP. The sensory reduction and attention-modulation impairment indicate local and systems-level pathophysiology proximal to disease onset that may be critical for etiology. Further, M100 and N100 enhancement may serve as outcome variables for targeted interventions to improve attention in early psychosis.
Affiliation(s)
- Mark T. Curtis: Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Xi Ren: Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Brian A. Coffman: Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Dean F. Salisbury: Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA

11. Morrill RJ, Bigelow J, DeKloe J, Hasenstaub AR. Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex. eLife 2022; 11:e75839. PMID: 35980027; PMCID: PMC9427107; DOI: 10.7554/elife.75839.
Abstract
In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep-layer neurons and neurons without spectrotemporal tuning. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant mapping stimuli during inter-trial intervals evoked fewer spikes without impairing stimulus encoding, indicating that attentional modulation generalized beyond training stimuli. Importantly, spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant background activity in AC, and that the deepest cortical layers serve as a hub for integrating extramodal contextual information.
Affiliation(s)
- Ryan J Morrill: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Neuroscience Graduate Program, University of California, San Francisco; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco
- James Bigelow: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco
- Jefferson DeKloe: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco
- Andrea R Hasenstaub: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Neuroscience Graduate Program, University of California, San Francisco; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco

12
Heald SLM, Van Hedger SC, Veillette J, Reis K, Snyder JS, Nusbaum HC. Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning. J Cogn Neurosci 2022; 34:425-444. [PMID: 34942645 PMCID: PMC8832160 DOI: 10.1162/jocn_a_01805]
Abstract
The ability to generalize across specific experiences is vital for recognizing new patterns, especially in speech perception, given the variability of acoustic-phonetic patterns. Indeed, behavioral research has demonstrated that listeners can, via a process of generalized learning, leverage their experience of past words spoken by a difficult-to-understand talker to improve their understanding of new words from that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest-posttest design with EEG, participants were trained using either (1) a large inventory of words in which no word was repeated across the experiment (generalized learning) or (2) a small inventory of words in which words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1-P2 complex and by the presence of a late negativity wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1-P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms to selectively modify early auditory processing sensitivity.
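The N1-P2 amplitude change reported here is conventionally measured peak-to-peak on the averaged evoked potential: the N1 minimum in an early window against the P2 maximum in a later window. A minimal sketch in Python (the function name, window bounds, and defaults are illustrative assumptions, not details from the paper):

```python
def n1_p2_amplitude(erp, times, n1_window=(0.08, 0.14), p2_window=(0.16, 0.26)):
    """Peak-to-peak N1-P2 amplitude from an averaged auditory evoked
    potential. `erp` is the averaged waveform, `times` the matching
    latencies in seconds. Window bounds are illustrative defaults."""
    def extreme(window, fn):
        # Collect samples whose latency falls inside the window,
        # then take the minimum (N1) or maximum (P2).
        vals = [v for v, t in zip(erp, times) if window[0] <= t <= window[1]]
        return fn(vals)

    n1 = extreme(n1_window, min)   # N1: negative-going deflection
    p2 = extreme(p2_window, max)   # P2: positive-going deflection
    return p2 - n1
```

A training-related reduction of this peak-to-peak value, computed separately at pretest and posttest, is the kind of amplitude effect the abstract describes for generalized learning.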
13
Ylinen A, Wikman P, Leminen M, Alho K. Task-dependent cortical activations during selective attention to audiovisual speech. Brain Res 2022; 1775:147739. [PMID: 34843702 DOI: 10.1016/j.brainres.2021.147739]
Abstract
Selective listening to speech depends on widespread brain networks, but how the involvement of different neural systems in speech processing is affected by factors such as the listener's task and speech intelligibility remains poorly understood. We used functional magnetic resonance imaging to systematically examine the effects that performing different tasks has on neural activations during selective attention to continuous audiovisual speech in the presence of task-irrelevant speech. Participants viewed audiovisual dialogues and attended either to the semantic or the phonological content of speech, or ignored speech altogether and performed a visual control task. The tasks were factorially combined with good and poor auditory and visual speech quality. Selective attention to speech engaged superior temporal regions and the left inferior frontal gyrus regardless of the task. Frontoparietal regions implicated in selective auditory attention to simple sounds (e.g., tones, syllables) were not engaged by the semantic task, suggesting that this network may not be as crucial when attending to continuous speech. The medial orbitofrontal cortex, implicated in social cognition, was most activated by the semantic task. Activity during the phonological task in the left prefrontal, premotor, and secondary somatosensory regions had a distinct temporal profile as well as the highest overall activity, possibly reflecting the role of the dorsal speech processing stream in sub-lexical processing. Our results demonstrate that task type influences neural activations during selective attention to speech, and they emphasize the importance of ecologically valid experimental designs.
Affiliation(s)
- Artturi Ylinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Neuroscience, Georgetown University, Washington, D.C., USA
- Miika Leminen
- Analytics and Data Services, HUS Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
14
Kiremitçi I, Yilmaz Ö, Çelik E, Shahdloo M, Huth AG, Çukur T. Attentional Modulation of Hierarchical Speech Representations in a Multitalker Environment. Cereb Cortex 2021; 31:4986-5005. [PMID: 34115102 PMCID: PMC8491717 DOI: 10.1093/cercor/bhab136]
Abstract
Humans are remarkably adept in listening to a desired speaker in a crowded environment, while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear across what levels of speech features and how much attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories, or selectively attended to a male or a female speaker in temporally overlaid stories in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations while growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insights on attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.
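The attentional modulation quantified here rests on comparing how well a voxel's fitted encoding model predicts responses to attended versus unattended stories. A schematic sketch of one such comparison in plain Python (the correlation-difference index and the function names are illustrative stand-ins, not the paper's exact estimator):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx ** 0.5 * vy ** 0.5)

def attentional_modulation(predicted, attended_resp, unattended_resp):
    """Difference in model prediction accuracy (correlation) between
    attended and unattended conditions for one voxel. Positive values
    indicate attentional enhancement of the modeled representation."""
    return pearson(predicted, attended_resp) - pearson(predicted, unattended_resp)
```

Applying such an index voxelwise, per feature space (spectral, articulatory, semantic), is one way to ask where in the hierarchy attention strengthens the stimulus representation.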
Affiliation(s)
- Ibrahim Kiremitçi
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Özgür Yilmaz
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Emin Çelik
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Mo Shahdloo
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, UK
- Alexander G Huth
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
- Tolga Çukur
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
15
Wang L, Wu EX, Chen F. EEG-based auditory attention decoding using speech-level-based segmented computational models. J Neural Eng 2021; 18. [PMID: 33957606 DOI: 10.1088/1741-2552/abfeba]
Abstract
Objective. Auditory attention in complex scenarios can be decoded by electroencephalography (EEG)-based cortical speech-envelope tracking. The relative root-mean-square (RMS) intensity is a valuable cue for decomposing speech into distinct characteristic segments. To improve auditory attention decoding (AAD) performance, this work proposed a novel segmented AAD approach that decodes target speech envelopes from different RMS-level-based speech segments. Approach. Speech was decomposed into higher- and lower-RMS-level segments with a threshold of -10 dB relative RMS level. A support vector machine classifier was designed to identify higher- and lower-RMS-level speech segments, using clean target and mixed speech as reference signals, based on EEG signals recorded while subjects listened to target auditory streams in competing two-speaker auditory scenes. Segmented computational models were developed from the classification results, and speech envelopes were reconstructed with segmented decoding models for either higher- or lower-RMS-level segments. AAD accuracies were calculated from the correlations between actual and reconstructed speech envelopes, and the performance of the proposed segmented AAD model was compared to that of traditional AAD methods with unified decoding functions. Main results. Higher- and lower-RMS-level speech segments in continuous sentences could be identified robustly, with classification accuracies that approximated or exceeded 80% based on the corresponding EEG signals at 6 dB, 3 dB, 0 dB, -3 dB, and -6 dB signal-to-mask ratios (SMRs). Compared with unified AAD decoding methods, the proposed segmented approach achieved more accurate reconstruction of target speech envelopes and more accurate detection of attentional direction. Moreover, the segmented decoding method had higher information transfer rates (ITRs) and shorter minimum expected switch times than the unified decoder. Significance. This study revealed that EEG signals may be used to classify higher- and lower-RMS-level speech segments across a wide range of SMR conditions (from 6 dB to -6 dB). A novel finding was that the specific information in different RMS-level-based segments facilitated EEG-based decoding of auditory attention. The significantly improved AAD accuracies and ITRs of the segmented decoding method suggest that this computational model may be an effective approach for neuro-controlled brain-computer interfaces in complex auditory scenes.
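The first stage of the segmented pipeline, splitting speech by relative RMS level at a -10 dB threshold, can be sketched as follows. This is a minimal illustration: the frame length, label names, and silent-frame guard are assumptions for the sketch, not the published implementation.

```python
import math

def rms_level_segments(signal, frame_len, threshold_db=-10.0):
    """Label non-overlapping frames of a waveform as higher- or
    lower-RMS-level segments, relative to the utterance-level RMS
    (0 dB reference). Returns (start_index, label) pairs."""
    # Utterance-level RMS serves as the 0 dB reference.
    overall_rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    labels = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        frame_rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        # Relative RMS level in dB; the epsilon guards silent frames.
        rel_db = 20 * math.log10(max(frame_rms, 1e-12) / overall_rms)
        labels.append((start, "higher" if rel_db >= threshold_db else "lower"))
    return labels
```

Separate decoders would then be trained and evaluated on the "higher" and "lower" segments, with the envelope-reconstruction correlation computed per segment class.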
Affiliation(s)
- Lei Wang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, People's Republic of China
- Ed X Wu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, People's Republic of China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China
16
Kohl C, Parviainen T, Jones SR. Neural Mechanisms Underlying Human Auditory Evoked Responses Revealed By Human Neocortical Neurosolver. Brain Topogr 2021; 35:19-35. [PMID: 33876329 PMCID: PMC8813713 DOI: 10.1007/s10548-021-00838-0]
Abstract
Auditory evoked fields (AEFs) are commonly studied, yet their underlying neural mechanisms remain poorly understood. Here, we used the biophysical modelling software Human Neocortical Neurosolver (HNN), whose foundation is a canonical neocortical circuit model, to interpret the cell and network mechanisms contributing to macroscale AEFs elicited by a simple tone and measured with magnetoencephalography. We found that AEFs can be reproduced by activating the neocortical circuit through a layer-specific sequence of feedforward and feedback excitatory synaptic drives, similar to prior simulations of somatosensory evoked responses, supporting the notion that basic structures and activation patterns are preserved across sensory regions. We also applied the modeling framework to develop and test predictions about the neural mechanisms underlying AEF differences between the left and right hemispheres, as well as between the hemispheres contralateral and ipsilateral to the auditory stimulus. We found that increasing the strength of the excitatory synaptic cortical feedback inputs to supragranular layers simulates the commonly observed right-hemisphere dominance, while decreasing the input latencies and simultaneously increasing the number of cells contributing to the signal accounted for the contralateral dominance. These results provide a direct link between human data and prior animal studies and lay the foundation for future translational research examining the mechanisms underlying alteration of this fundamental biomarker of auditory processing in healthy cognition and neuropathology.
Affiliation(s)
- Carmen Kohl
- Department of Neuroscience, Carney Institute for Brain Sciences, Brown University, Providence, USA
- Tiina Parviainen
- Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, P.O. Box 35, 40014, Jyväskylä, Finland
- MEG Core, Aalto NeuroImaging, Aalto University, P.O. Box 15100, 00076 AALTO, Espoo, Finland
- Stephanie R Jones
- Department of Neuroscience, Carney Institute for Brain Sciences, Brown University, Providence, USA
- Center for Neurorestoration and Neurotechnology, Providence VAMC, Providence, USA
17
Rocchi F, Ramachandran R. Foreground stimuli and task engagement enhance neuronal adaptation to background noise in the inferior colliculus of macaques. J Neurophysiol 2020; 124:1315-1326. [PMID: 32937088 DOI: 10.1152/jn.00153.2020]
Abstract
Auditory neuronal responses are modified by background noise. Inferior colliculus (IC) neuronal responses adapt to the most frequent sound level within an acoustic scene (adaptation to stimulus statistics), a mechanism that may preserve neuronal and behavioral thresholds for signal detection. However, it is still unclear whether the presence of foreground stimuli and/or task involvement can modify neuronal adaptation. To investigate how task engagement interacts with this mechanism, we compared the responses of IC neurons to background noise, which caused adaptation to stimulus statistics, while macaque monkeys performed a masked tone detection task (task-driven condition) with responses recorded when the same background noise was presented alone (passive listening condition). In the task-driven condition, monkeys performed a Go/No-Go task while 50-ms tones were embedded within an adaptation-inducing continuous background noise whose level changed every 50 ms and was drawn from a probability distribution. Adaptation to noise stimulus statistics in IC neuronal responses was significantly enhanced in the task-driven condition compared with the passive listening condition, showing that foreground stimuli and/or task engagement can modify IC neuronal responses. Additionally, the response of IC neurons to noise was significantly affected by the preceding sensory information (history effect) regardless of task involvement. These studies show that dynamic range adaptation in the IC preserves behavioral and neurometric thresholds irrespective of noise type, and that neuronal activity depends on task-related factors at subcortical levels of processing. NEW & NOTEWORTHY Auditory neuronal responses are influenced by maskers and distractors. However, it is still unclear whether neuronal sensitivity to the masker stimulus is influenced by task-dependent factors. Our study represents one of the first attempts to investigate how task involvement influences the neural representation of background sounds in the subcortical, midbrain auditory neurons of behaving animals.
Affiliation(s)
- Francesca Rocchi
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
18
Regev M, Simony E, Lee K, Tan KM, Chen J, Hasson U. Propagation of Information Along the Cortical Hierarchy as a Function of Attention While Reading and Listening to Stories. Cereb Cortex 2020; 29:4017-4034. [PMID: 30395174 DOI: 10.1093/cercor/bhy282]
Abstract
How does attention route information from sensory to high-order areas as a function of task, within the relatively fixed topology of the brain? In this study, participants were simultaneously presented with 2 unrelated stories-one spoken and one written-and asked to attend one while ignoring the other. We used fMRI and a novel intersubject correlation analysis to track the spread of information along the processing hierarchy as a function of task. Processing the unattended spoken (written) information was confined to auditory (visual) cortices. In contrast, attending to the spoken (written) story enhanced the stimulus-selective responses in sensory regions and allowed it to spread into higher-order areas. Surprisingly, we found that the story-specific spoken (written) responses for the attended story also reached secondary visual (auditory) regions of the unattended sensory modality. These results demonstrate how attention enhances the processing of attended input and allows it to propagate across brain areas.
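The intersubject correlation analysis underlying results like these is commonly computed in leave-one-out form: each subject's regional time course is correlated with the average time course of all other subjects. A minimal sketch in plain Python, offered as a simplified stand-in for the novel variant used in the paper:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx ** 0.5 * vy ** 0.5)

def leave_one_out_isc(timecourses):
    """Intersubject correlation for one voxel/region: correlate each
    subject's time course with the mean of all other subjects'.
    `timecourses` is a list of equal-length per-subject sequences."""
    n = len(timecourses)
    t = len(timecourses[0])
    iscs = []
    for i in range(n):
        others_mean = [sum(timecourses[j][k] for j in range(n) if j != i) / (n - 1)
                       for k in range(t)]
        iscs.append(pearson(timecourses[i], others_mean))
    return iscs
```

High ISC indicates stimulus-locked shared responses; comparing ISC maps between attention conditions (or between groups correlated across conditions, as in this study's intersubject functional-correlation variant) localizes where processing spreads.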
Affiliation(s)
- Mor Regev
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Psychology, Princeton University, Princeton, NJ, USA; Montreal Neurological Institute, McGill University, Montréal, Quebec, Canada
- Erez Simony
- Faculty of Electrical Engineering, Holon Institute of Technology, Holon, Israel; Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Katherine Lee
- Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, USA
- Kean Ming Tan
- School of Statistics, University of Minnesota, Minneapolis, MN, USA
- Janice Chen
- Department of Psychology & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Uri Hasson
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Psychology, Princeton University, Princeton, NJ, USA
19
Whitten A, Key AP, Mefferd AS, Bodfish JW. Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children. Brain Lang 2020; 207:104825. [PMID: 32563764 DOI: 10.1016/j.bandl.2020.104825]
Abstract
Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms compared to nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech than nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and complexity-matched nonspeech analog sounds in 22 8-11-year-old children. We found that although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech compared to cow vocalizations, but were significantly slower to synthetic speech compared to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
Affiliation(s)
- Allison Whitten
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- Antje S Mefferd
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- James W Bodfish
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA; Vanderbilt Brain Institute, 6133 Medical Research Building III, 465 21st Avenue S., Nashville, TN, USA
20
Zempeltzi MM, Kisse M, Brunk MGK, Glemser C, Aksit S, Deane KE, Maurya S, Schneider L, Ohl FW, Deliano M, Happel MFK. Task rule and choice are reflected by layer-specific processing in rodent auditory cortical microcircuits. Commun Biol 2020; 3:345. [PMID: 32620808 PMCID: PMC7335110 DOI: 10.1038/s42003-020-1073-3]
Abstract
The primary auditory cortex (A1) is an essential, integrative node that encodes the behavioral relevance of acoustic stimuli, predictions, and auditory-guided decision-making. However, the realization of this integration with respect to the cortical microcircuitry is not well understood. Here, we characterize layer-specific, spatiotemporal synaptic population activity with chronic, laminar current source density analysis in Mongolian gerbils (Meriones unguiculatus) trained in an auditory decision-making Go/NoGo shuttle-box task. We demonstrate that not only sensory but also task- and choice-related information is represented in the mesoscopic neuronal population code of A1. Based on generalized linear-mixed effect models we found a layer-specific and multiplexed representation of the task rule, action selection, and the animal's behavioral options as accumulating evidence in preparation of correct choices. The findings expand our understanding of how individual layers contribute to the integrative circuit in the sensory cortex in order to code task-relevant information and guide sensory-based decision-making.
Affiliation(s)
- Martin Kisse
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Claudia Glemser
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Sümeyra Aksit
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Katrina E Deane
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Shivam Maurya
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Lina Schneider
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Frank W Ohl
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Institute of Biology, Otto von Guericke University, D-39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
- Max F K Happel
- Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
21
Mamashli F, Huang S, Khan S, Hämäläinen MS, Ahlfors SP, Ahveninen J. Distinct Regional Oscillatory Connectivity Patterns During Auditory Target and Novelty Processing. Brain Topogr 2020; 33:477-488. [PMID: 32441009 DOI: 10.1007/s10548-020-00776-3]
Abstract
Auditory attention allows us to focus on relevant target sounds in the acoustic environment while maintaining the capability to orient to unpredictable (novel) sound changes. An open question is whether orienting to expected vs. unexpected auditory events are governed by anatomically distinct attention pathways, respectively, or by differing communication patterns within a common system. To address this question, we applied a recently developed PeSCAR analysis method to evaluate spectrotemporal functional connectivity patterns across subregions of broader cortical regions of interest (ROIs) to analyze magnetoencephalography data obtained during a cued auditory attention task. Subjects were instructed to detect a predictable harmonic target sound embedded among standard tones in one ear and to ignore the standard tones and occasional unpredictable novel sounds presented in the opposite ear. Phase coherence of estimated source activity was calculated between subregions of superior temporal, frontal, inferior parietal, and superior parietal cortex ROIs. Functional connectivity was stronger in response to target than novel stimuli between left superior temporal and left parietal ROIs and between left frontal and right parietal ROIs, with the largest effects observed in the beta band (15-35 Hz). In contrast, functional connectivity was stronger in response to novel than target stimuli in inter-hemispheric connections between left and right frontal ROIs, observed in early time windows in the alpha band (8-12 Hz). Our findings suggest that auditory processing of expected target vs. unexpected novel sounds involves different spatially, temporally, and spectrally distributed oscillatory connectivity patterns across temporal, parietal, and frontal areas.
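Phase coherence between two source regions, of the kind computed here between ROI subregions, is often quantified as a phase-locking value: the magnitude of the mean complex phase difference across trials. A minimal sketch (illustrative only; not the exact PeSCAR computation):

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """Phase coherence between two sources at one time-frequency point:
    magnitude of the trial-averaged complex phase difference.
    1 = perfectly phase-locked across trials; near 0 = random relation.
    `phases_a` and `phases_b` are per-trial instantaneous phases (radians)."""
    diffs = [cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))
```

Computing this per frequency band (e.g., alpha, 8-12 Hz; beta, 15-35 Hz) and per condition, then contrasting target versus novel trials, yields the kind of spectrotemporal connectivity differences the abstract reports.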
Affiliation(s)
- Fahimeh Mamashli
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Samantha Huang
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Sheraz Khan
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Matti S Hämäläinen
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Seppo P Ahlfors
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Jyrki Ahveninen
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
22
Salmi J, Metwaly M, Tohka J, Alho K, Leppämäki S, Tani P, Koski A, Vanderwal T, Laine M. ADHD desynchronizes brain activity during watching a distracted multi-talker conversation. Neuroimage 2019; 216:116352. [PMID: 31730921 DOI: 10.1016/j.neuroimage.2019.116352]
Abstract
Individuals with attention-deficit/hyperactivity disorder (ADHD) have difficulties navigating dynamic everyday situations that contain multiple sensory inputs that need to be either attended to or ignored. As conventional experimental tasks lack this type of everyday complexity, we administered a film-based multi-talker condition with auditory distractors in the background. ADHD-related aberrant brain responses to this naturalistic stimulus were identified using intersubject correlations (ISCs) in functional magnetic resonance imaging (fMRI) data collected from 51 adults with ADHD and 29 healthy controls. A novel permutation-based approach introducing studentized statistics and subject-wise voxel-level null distributions revealed that several areas in cerebral attention networks and sensory cortices were desynchronized in participants with ADHD (n = 20) relative to healthy controls (n = 20). Specifically, desynchronization of the posterior parietal cortex occurred when irrelevant speech or music was presented in the background, but not when irrelevant white noise was presented or when there were no distractors. We also show regionally distinct ISC signatures for inattention and impulsivity. Finally, post-scan recall of the film contents was associated with stronger ISCs in the default-mode network for the ADHD group and in the dorsal attention network for the healthy controls. The present study shows that ISCs can further our understanding of how a complex environment influences brain states in ADHD.
Affiliation(s)
- Juha Salmi
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Rakentajanaukio 2, Espoo, Finland; Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland; Turku Institute for Advanced Studies, University of Turku, Turku, Finland; Department of Psychology, Åbo Akademi University, Turku, Finland; Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; AMI Centre, Aalto Neuroimaging, Aalto University, Espoo, Finland.
- Mostafa Metwaly
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Jussi Tohka
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; AMI Centre, Aalto Neuroimaging, Aalto University, Espoo, Finland
- Sami Leppämäki
- Department of Psychiatry, Helsinki University Hospital, Helsinki, Finland
- Pekka Tani
- Department of Psychiatry, Helsinki University Hospital, Helsinki, Finland
- Anniina Koski
- Department of Psychiatry, Helsinki University Hospital, Helsinki, Finland
- Tamara Vanderwal
- Department of Psychiatry, University of British Columbia, Vancouver, Canada
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland; Turku Brain and Mind Center, University of Turku, Turku, Finland
23
O'Sullivan J, Herrero J, Smith E, Schevon C, McKhann GM, Sheth SA, Mehta AD, Mesgarani N. Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron 2019; 104:1195-1209.e3. [PMID: 31648900 DOI: 10.1016/j.neuron.2019.09.007] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 07/11/2019] [Accepted: 09/06/2019] [Indexed: 11/15/2022]
Abstract
Humans can easily focus on one speaker in a multi-talker acoustic environment, but how different areas of the human auditory cortex (AC) represent the acoustic components of mixed speech is unknown. We obtained invasive recordings from the primary and nonprimary AC in neurosurgical patients as they listened to multi-talker speech. We found that neural sites in the primary AC responded to individual speakers in the mixture and were relatively unchanged by attention. In contrast, neural sites in the nonprimary AC were less discerning of individual speakers but selectively represented the attended speaker. Moreover, the encoding of the attended speaker in the nonprimary AC was invariant to the degree of acoustic overlap with the unattended speaker. Finally, this emergent representation of attended speech in the nonprimary AC was linearly predictable from the primary AC responses. Our results reveal the neural computations underlying the hierarchical formation of auditory objects in human AC during multi-talker speech perception.
Affiliation(s)
- James O'Sullivan
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Jose Herrero
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, New York, NY, USA
- Elliot Smith
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, University of Utah, Salt Lake City, UT, USA
- Catherine Schevon
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Guy M McKhann
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Sameer A Sheth
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Ashesh D Mehta
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, New York, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA
24
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds, and so we must focus our attention on the relevant source in order to segregate it from the competing sources (the 'cocktail party effect'). While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli that were moving within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In the first experiment, we used noise stimuli and the task involved spatially localizing embedded targets. In the second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli in the delta phase of the EEG. In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to that of envelope-based decoders. These results suggest a possible dissociation of delta phase and alpha power of EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
- Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA
25
Adelhöfer N, Gohil K, Passow S, Beste C, Li SC. Lateral prefrontal anodal transcranial direct current stimulation augments resolution of auditory perceptual-attentional conflicts. Neuroimage 2019; 199:217-227. [DOI: 10.1016/j.neuroimage.2019.05.009] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 05/01/2019] [Accepted: 05/04/2019] [Indexed: 01/24/2023] Open
26
Ogg M, Carlson TA, Slevc LR. The Rapid Emergence of Auditory Object Representations in Cortex Reflect Central Acoustic Attributes. J Cogn Neurosci 2019; 32:111-123. [PMID: 31560265 DOI: 10.1162/jocn_a_01472] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
27
Kell AJE, McDermott JH. Invariance to background noise as a signature of non-primary auditory cortex. Nat Commun 2019; 10:3958. [PMID: 31477711 PMCID: PMC6718388 DOI: 10.1038/s41467-019-11710-y] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Accepted: 07/30/2019] [Indexed: 12/22/2022] Open
Abstract
Despite well-established anatomical differences between primary and non-primary auditory cortex, the associated representational transformations have remained elusive. Here we show that primary and non-primary auditory cortex are differentiated by their invariance to real-world background noise. We measured fMRI responses to natural sounds presented in isolation and in real-world noise, quantifying invariance as the correlation between the two responses for individual voxels. Non-primary areas were substantially more noise-invariant than primary areas. This primary-nonprimary difference occurred both for speech and non-speech sounds and was unaffected by a concurrent demanding visual task, suggesting that the observed invariance is not specific to speech processing and is robust to inattention. The difference was most pronounced for real-world background noise-both primary and non-primary areas were relatively robust to simple types of synthetic noise. Our results suggest a general representational transformation between auditory cortical stages, illustrating a representational consequence of hierarchical organization in the auditory system.
Affiliation(s)
- Alexander J E Kell
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Zuckerman Institute of Mind, Brain, and Behavior, Columbia University, New York, NY, 10027, USA.
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA, USA.
28
Yakunina N, Tae WS, Kim SS, Nam EC. Functional MRI evidence of the cortico-olivary efferent pathway during active auditory target processing in humans. Hear Res 2019; 379:1-11. [DOI: 10.1016/j.heares.2019.04.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/02/2019] [Revised: 04/11/2019] [Accepted: 04/16/2019] [Indexed: 01/14/2023]
29
Tsunada J, Cohen Y, Gold JI. Post-decision processing in primate prefrontal cortex influences subsequent choices on an auditory decision-making task. eLife 2019; 8:46770. [PMID: 31169495 PMCID: PMC6570479 DOI: 10.7554/elife.46770] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 06/05/2019] [Indexed: 01/11/2023] Open
Abstract
Perceptual decisions do not occur in isolation but instead reflect ongoing evaluation and adjustment processes that can affect future decisions. However, the neuronal substrates of these across-decision processes are not well understood, particularly for auditory decisions. We measured and manipulated the activity of choice-selective neurons in the ventrolateral prefrontal cortex (vlPFC) while monkeys made decisions about the frequency content of noisy auditory stimuli. As the decision was being formed, vlPFC activity was not modulated strongly by the task. However, after decision commitment, vlPFC population activity encoded the sensory evidence, choice, and outcome of the current trial and predicted subject-specific choice biases on the subsequent trial. Consistent with these patterns of neuronal activity, electrical microstimulation in vlPFC tended to affect the subsequent, but not current, decision. Thus, distributed post-commitment representations of graded decision-related information in prefrontal cortex can play a causal role in evaluating past decisions and biasing subsequent ones.
Affiliation(s)
- Joji Tsunada
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, United States; Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Japan
- Yale Cohen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, United States; Department of Neuroscience, University of Pennsylvania, Philadelphia, United States; Department of Bioengineering, University of Pennsylvania, Philadelphia, United States
- Joshua I Gold
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
30
Salmela V, Salo E, Salmi J, Alho K. Spatiotemporal Dynamics of Attention Networks Revealed by Representational Similarity Analysis of EEG and fMRI. Cereb Cortex 2019; 28:549-560. [PMID: 27999122 DOI: 10.1093/cercor/bhw389] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2016] [Accepted: 12/01/2016] [Indexed: 12/12/2022] Open
Abstract
The fronto-parietal attention networks have been extensively studied with functional magnetic resonance imaging (fMRI), but spatiotemporal dynamics of these networks are not well understood. We measured event-related potentials (ERPs) with electroencephalography (EEG) and collected fMRI data from identical experiments where participants performed visual and auditory discrimination tasks separately or simultaneously and with or without distractors. To overcome the low temporal resolution of fMRI, we used a novel ERP-based application of multivariate representational similarity analysis (RSA) to parse time-averaged fMRI pattern activity into distinct spatial maps that each corresponded, in representational structure, to a short temporal ERP segment. Discriminant analysis of ERP-fMRI correlations revealed 8 cortical networks (2 sensory, 3 attention, and 3 other), segregated by 4 orthogonal, temporally multifaceted and spatially distributed functions. We interpret these functions as 4 spatiotemporal components of attention: modality-dependent and stimulus-driven orienting, top-down control, mode transition, and response preparation, selection and execution.
Affiliation(s)
- V Salmela
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioral Sciences, University of Helsinki, FI-00014 Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo FI-00076, Finland
- E Salo
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioral Sciences, University of Helsinki, FI-00014 Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo FI-00076, Finland
- J Salmi
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioral Sciences, University of Helsinki, FI-00014 Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo FI-00076, Finland; Faculty of Arts, Psychology and Theology, Åbo Akademi University, FI-20500 Turku, Finland
- K Alho
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioral Sciences, University of Helsinki, FI-00014 Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo FI-00076, Finland
31
Abstract
Most studies examining the neural underpinnings of music listening have no specific instruction on how to process the presented musical pieces. In this study, we explicitly manipulated the participants' focus of attention while they listened to the musical pieces. We used an ecologically valid experimental setting by presenting the musical stimuli simultaneously with naturalistic film sequences. In one condition, the participants were instructed to focus their attention on the musical piece (attentive listening), whereas in the second condition, the participants directed their attention to the film sequence (passive listening). We used two instrumental musical pieces: an electronic pop song, which was a major hit at the time of testing, and a classical musical piece. During music presentation, we measured electroencephalographic oscillations and responses from the autonomic nervous system (heart rate and high-frequency heart rate variability). During passive listening to the pop song, we found strong event-related synchronizations in all analyzed frequency bands (theta, lower alpha, upper alpha, lower beta, and upper beta). The neurophysiological responses during attentive listening to the pop song were similar to those of the classical musical piece during both listening conditions. Thus, the focus of attention had a strong influence on the neurophysiological responses to the pop song, but not on the responses to the classical musical piece. The electroencephalographic responses during passive listening to the pop song are interpreted as a neurophysiological and psychological state typically observed when the participants are 'drawn into the music'.
32
Wikman P, Rinne T, Petkov CI. Reward cues readily direct monkeys' auditory performance resulting in broad auditory cortex modulation and interaction with sites along cholinergic and dopaminergic pathways. Sci Rep 2019; 9:3055. [PMID: 30816142 PMCID: PMC6395775 DOI: 10.1038/s41598-019-38833-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Accepted: 12/28/2018] [Indexed: 11/18/2022] Open
Abstract
In natural settings, the prospect of reward often influences the focus of our attention, but how cognitive and motivational systems influence sensory cortex is not well understood. Also, challenges in training nonhuman animals on cognitive tasks complicate cross-species comparisons and interpreting results on the neurobiological bases of cognition. Incentivized attention tasks could expedite training and evaluate the impact of attention on sensory cortex. Here we develop an Incentivized Attention Paradigm (IAP) and use it to show that macaque monkeys readily learn to use auditory or visual reward cues, drastically influencing their performance within a simple auditory task. Next, this paradigm was used with functional neuroimaging to measure activation modulation in the monkey auditory cortex. The results show modulation of extensive auditory cortical regions throughout primary and non-primary regions, which although a hallmark of attentional modulation in human auditory cortex, has not been studied or observed as broadly in prior data from nonhuman animals. Psycho-physiological interactions were identified between the observed auditory cortex effects and regions including basal forebrain sites along acetylcholinergic and dopaminergic pathways. The findings reveal the impact and regional interactions in the primate brain during an incentivized attention engaging auditory task.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, 00014, Helsinki, Finland.
- Teemu Rinne
- Turku Brain and Mind Center, Department of Clinical Medicine, University of Turku, 20014, Turku, Finland.
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom.
- Centre for Behaviour and Evolution, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom.
33
Object-based attention in complex, naturalistic auditory streams. Sci Rep 2019; 9:2854. [PMID: 30814547 PMCID: PMC6393668 DOI: 10.1038/s41598-019-39166-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Accepted: 01/14/2019] [Indexed: 11/08/2022] Open
Abstract
In vision, objects have been described as the 'units' on which non-spatial attention operates in many natural settings. Here, we test the idea of object-based attention in the auditory domain within ecologically valid auditory scenes, composed of two spatially and temporally overlapping sound streams (speech signal vs. environmental soundscapes in Experiment 1 and two speech signals in Experiment 2). Top-down attention was directed to one or the other auditory stream by a non-spatial cue. To test for high-level, object-based attention effects we introduce an auditory repetition detection task in which participants have to detect brief repetitions of auditory objects, ruling out any possible confounds with spatial or feature-based attention. The participants' responses were significantly faster and more accurate in the valid cue condition compared to the invalid cue condition, indicating a robust cue-validity effect of high-level, object-based auditory attention.
34
Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks. Neuropsychologia 2019; 124:322-336. [PMID: 30444980 DOI: 10.1016/j.neuropsychologia.2018.11.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 09/20/2018] [Accepted: 11/08/2018] [Indexed: 11/22/2022]
Abstract
A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by either overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
35
Neural Variability Limits Adolescent Skill Learning. J Neurosci 2019; 39:2889-2902. [PMID: 30755494 DOI: 10.1523/jneurosci.2878-18.2019] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 01/24/2019] [Accepted: 01/26/2019] [Indexed: 12/31/2022] Open
Abstract
Skill learning is fundamental to the acquisition of many complex behaviors that emerge during development. For example, years of practice give rise to perceptual improvements that contribute to mature speech and language skills. While fully honed learning skills might be thought to offer an advantage during the juvenile period, the ability to learn actually continues to develop through childhood and adolescence, suggesting that the neural mechanisms that support skill learning are slow to mature. To address this issue, we asked whether the rate and magnitude of perceptual learning varies as a function of age as male and female gerbils trained on an auditory task. Adolescents displayed a slower rate of perceptual learning compared with their young and mature counterparts. We recorded auditory cortical neuron activity from a subset of adolescent and adult gerbils as they underwent perceptual training. While training enhanced the sensitivity of most adult units, the sensitivity of many adolescent units remained unchanged, or even declined across training days. Therefore, the average rate of cortical improvement was significantly slower in adolescents compared with adults. Both smaller differences between sound-evoked response magnitudes and greater trial-to-trial response fluctuations contributed to the poorer sensitivity of individual adolescent neurons. Together, these findings suggest that elevated sensory neural variability limits adolescent skill learning.
SIGNIFICANCE STATEMENT: The ability to learn new skills emerges gradually as children age. This prolonged development, often lasting well into adolescence, suggests that children, teens, and adults may rely on distinct neural strategies to improve their sensory and motor capabilities. Here, we found that practice-based improvement on a sound detection task is slower in adolescent gerbils than in younger or older animals. Neural recordings made during training revealed that practice enhanced the sound sensitivity of adult cortical neurons, but had a weaker effect in adolescents. This latter finding was partially explained by the fact that adolescent neural responses were more variable than in adults. Our results suggest that one mechanistic basis of adult-like skill learning is a reduction in neural response variability.
36
Mäntylä T, Nummenmaa L, Rikandi E, Lindgren M, Kieseppä T, Hari R, Suvisaari J, Raij TT. Aberrant Cortical Integration in First-Episode Psychosis During Natural Audiovisual Processing. Biol Psychiatry 2018; 84:655-664. [PMID: 29885763 DOI: 10.1016/j.biopsych.2018.04.014] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/15/2017] [Revised: 04/16/2018] [Accepted: 04/22/2018] [Indexed: 01/13/2023]
Abstract
BACKGROUND: Functional magnetic resonance imaging studies of psychotic disorders have reported both hypoactivity and hyperactivity in numerous brain regions. In line with the dysconnection hypothesis, these regions include cortical integrative hub regions. However, most earlier studies focused on a single cognitive function at a time, assessed by delivering artificial stimuli to patients with chronic psychosis. Thus, it remains unresolved whether these findings are present already in early psychosis and whether they translate to real-life-like conditions that require multisensory processing and integration.
METHODS: Scenes from the movie Alice in Wonderland (2010) were shown to 51 patients with first-episode psychosis (16 women) and 32 community-based control subjects (17 women) during 3T functional magnetic resonance imaging. We compared intersubject correlation, a measure of similarity of brain signal time courses in each voxel, between the groups. We also quantified the hubness as the number of connections each region has.
RESULTS: Intersubject correlation was significantly lower in patients with first-episode psychosis than in control subjects in the medial and lateral prefrontal, cingulate, precuneal, and parietotemporal regions, including the default mode network. Regional magnitude of between-group difference in intersubject correlation was associated with the hubness.
CONCLUSIONS: Our findings provide novel evidence for the dysconnection hypothesis by showing that during complex real-life-like stimulation, the most prominent functional alterations in psychotic disorders relate to integrative brain functions. Presence of such abnormalities in first-episode psychosis rules out long-term effects of illness or medication. These methods can be used in further studies to map widespread hub alterations in a single functional magnetic resonance imaging session and link them to potential downstream and upstream pathways.
Affiliation(s)
- Teemu Mäntylä
- Mental Health Unit, National Institute for Health and Welfare, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Psychology and Logopedics, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Neuroscience and Biomedical Engineering and Advanced Magnetic Imaging Center, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland.
| | - Lauri Nummenmaa
- Department of Neuroscience and Biomedical Engineering and Advanced Magnetic Imaging Center, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland; Turku PET Centre and Department of Psychology, University of Turku, Turku, Finland
- Eva Rikandi
- Mental Health Unit, National Institute for Health and Welfare, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Psychology and Logopedics, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Neuroscience and Biomedical Engineering and Advanced Magnetic Imaging Center, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- Maija Lindgren
- Mental Health Unit, National Institute for Health and Welfare, University of Helsinki, Helsinki University Hospital, Helsinki, Finland
- Tuula Kieseppä
- Mental Health Unit, National Institute for Health and Welfare, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Psychiatry, University of Helsinki, Helsinki University Hospital, Helsinki, Finland
- Riitta Hari
- Department of Art, School of Arts, Design and Architecture, Aalto University, Helsinki, Finland; Department of Neuroscience and Biomedical Engineering and Advanced Magnetic Imaging Center, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- Jaana Suvisaari
- Mental Health Unit, National Institute for Health and Welfare, University of Helsinki, Helsinki University Hospital, Helsinki, Finland
- Tuukka T Raij
- Department of Psychiatry, University of Helsinki, Helsinki University Hospital, Helsinki, Finland; Department of Neuroscience and Biomedical Engineering and Advanced Magnetic Imaging Center, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
37
Ruggles DR, Tausend AN, Shamma SA, Oxenham AJ. Cortical markers of auditory stream segregation revealed for streaming based on tonotopy but not pitch. J Acoust Soc Am 2018; 144:2424. [PMID: 30404514 PMCID: PMC6909992 DOI: 10.1121/1.5065392]
Abstract
The brain decomposes mixtures of sounds, such as competing talkers, into perceptual streams that can be attended to individually. Attention can enhance the cortical representation of streams, but it is unknown what acoustic features the enhancement reflects, or where in the auditory pathways attentional enhancement is first observed. Here, behavioral measures of streaming were combined with simultaneous low- and high-frequency envelope-following responses (EFR) that are thought to originate primarily from cortical and subcortical regions, respectively. Repeating triplets of harmonic complex tones were presented with alternating fundamental frequencies. The tones were filtered to contain either low-numbered spectrally resolved harmonics, or only high-numbered unresolved harmonics. The behavioral results confirmed that segregation can be based on either tonotopic or pitch cues. The EFR results revealed no effects of streaming or attention on subcortical responses. Cortical responses revealed attentional enhancement under conditions of streaming, but only when tonotopic cues were available, not when streaming was based only on pitch cues. The results suggest that the attentional modulation of phase-locked responses is dominated by tonotopically tuned cortical neurons that are insensitive to pitch or periodicity cues.
Affiliation(s)
- Dorea R Ruggles
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Alexis N Tausend
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Shihab A Shamma
- Electrical and Computer Engineering Department & Institute for Systems, University of Maryland, College Park, Maryland 20740, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
38
Hjortkjær J, Kassuba T, Madsen KH, Skov M, Siebner HR. Task-Modulated Cortical Representations of Natural Sound Source Categories. Cereb Cortex 2018; 28:295-306. [PMID: 29069292 DOI: 10.1093/cercor/bhx263]
Abstract
In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
Affiliation(s)
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Tanja Kassuba
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Martin Skov
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Decision Neuroscience Research Group, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen, 2400 København NV, Denmark
39
What's what in auditory cortices? Neuroimage 2018; 176:29-40. [DOI: 10.1016/j.neuroimage.2018.04.028]
40
Salmi J, Salmela V, Salo E, Mikkola K, Leppämäki S, Tani P, Hokkanen L, Laasonen M, Numminen J, Alho K. Out of focus – Brain attention control deficits in adult ADHD. Brain Res 2018; 1692:12-22. [DOI: 10.1016/j.brainres.2018.04.019]
41
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038]
42
Overcoming Bias: Cognitive Control Reduces Susceptibility to Framing Effects in Evaluating Musical Performance. Sci Rep 2018; 8:6229. [PMID: 29670143 PMCID: PMC5906609 DOI: 10.1038/s41598-018-24528-3]
Abstract
Prior expectations can bias evaluative judgments of sensory information. We show that information about a performer’s status can bias the evaluation of musical stimuli, reflected by differential activity of the ventromedial prefrontal cortex (vmPFC). Moreover, we demonstrate that decreased susceptibility to this confirmation bias is (a) accompanied by the recruitment of and (b) correlated with the white-matter structure of the executive control network, particularly related to the dorsolateral prefrontal cortex (dlPFC). By using long-duration musical stimuli, we were able to track the initial biasing, subsequent perception, and ultimate evaluation of the stimuli, examining the full evolution of these biases over time. Our findings confirm the persistence of confirmation bias effects even when ample opportunity exists to gather information about true stimulus quality, and underline the importance of executive control in reducing bias.
43
Rinne T, Muers RS, Salo E, Slater H, Petkov CI. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences? Cereb Cortex 2018; 27:3471-3484. [PMID: 28419201 PMCID: PMC5654311 DOI: 10.1093/cercor/bhx092]
Abstract
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals.
Affiliation(s)
- Teemu Rinne
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- Ross S Muers
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Emma Salo
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Heather Slater
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
44
Iacovella V, Faes L, Hasson U. Task-induced deactivation in diverse brain systems correlates with interindividual differences in distinct autonomic indices. Neuropsychologia 2018. [PMID: 29530799 DOI: 10.1016/j.neuropsychologia.2018.03.005]
Abstract
Neuroimaging research has shown that different cognitive tasks induce relatively specific activation patterns, as well as less task-specific deactivation patterns. Here we examined whether individual differences in Autonomic Nervous System (ANS) activity during task performance correlate with the magnitude of task-induced deactivation. In an fMRI study, participants performed a continuous mental arithmetic task in a task/rest block design, while undergoing combined fMRI and heart/respiration rate acquisitions using a photoplethysmograph and a respiration belt. As expected, task performance increased heart rate and reduced the RMSSD, a cardiac index related to vagal tone. Across participants, higher heart rate during task was linked to increased activation in fronto-parietal regions, as well as to stronger deactivation in ventromedial prefrontal regions. Respiration frequency during task was associated with similar patterns, but in different regions than those identified for heart rate. Finally, in a large set of regions, almost exclusively limited to the Default Mode Network, lower RMSSD was associated with greater deactivation, and the vast majority of these regions were task-deactivated at the group level. Together, our findings show that inter-individual differences in ANS activity are strongly linked to task-induced deactivation. Importantly, our findings suggest that deactivation is a multifaceted construct potentially linked to ANS control, because distinct ANS measures correlate with deactivation in different regions. We discuss the implications for current theories of cortical control of the ANS and for accounts of deactivation, with particular reference to studies documenting a "failure to deactivate" in multiple clinical states.
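The RMSSD used in this entry as a vagal-tone index is a simple computation over successive interbeat (R-R) intervals. A minimal illustrative sketch (the function name and sample values below are our own, not from the study):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD), in ms.

    rr_intervals_ms: sequence of interbeat (R-R) intervals in milliseconds.
    Higher RMSSD reflects greater beat-to-beat variability, commonly
    interpreted as stronger vagal (parasympathetic) influence.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                      # successive differences
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical R-R series: diffs are 10, -20, 25, -10 ms
print(rmssd([800, 810, 790, 815, 805]))  # → 17.5
```

A task that reduces RMSSD, as reported here, shrinks these successive differences, i.e., the heartbeat becomes more metronomic under load.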
Affiliation(s)
- Vittorio Iacovella
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy.
- Luca Faes
- BIOtech, Department of Industrial Engineering, University of Trento, Trento, Italy; IRCS PAT-FBK Trento, Italy
- Uri Hasson
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy; Center for Practical Wisdom, The University of Chicago, Chicago, USA
45
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215 DOI: 10.1093/cercor/bhw160]
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
46
Häkkinen S, Rinne T. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex. Brain Struct Funct 2018; 223:2113-2127. [DOI: 10.1007/s00429-018-1612-6]
47
Chaves-Coira I, Rodrigo-Angulo ML, Nuñez A. Bilateral Pathways from the Basal Forebrain to Sensory Cortices May Contribute to Synchronous Sensory Processing. Front Neuroanat 2018; 12:5. [PMID: 29410616 PMCID: PMC5787133 DOI: 10.3389/fnana.2018.00005]
Abstract
Sensory processing in the cortex should integrate inputs arriving from receptive fields located on both sides of the body. This role could be played by the corpus callosum through precise projections between both hemispheres. However, different studies suggest that cholinergic projections from the basal forebrain (BF) could also contribute to the synchronization and integration of cortical activities. Using tracer injections and optogenetic techniques in transgenic mice, we investigated whether BF cells project bilaterally to sensory cortical areas, and have provided anatomical evidence to support a modulatory role for the cholinergic projections in sensory integration. Application of the retrograde tracer Fluoro-Gold or Fast Blue in both hemispheres of the primary somatosensory (S1), auditory or visual cortical areas showed labeled neurons in the ipsi- and contralateral areas of the diagonal band of Broca and substantia innominata. The nucleus basalis magnocellularis only showed ipsilateral projections to the cortex. Optogenetic stimulation of the horizontal limb of the diagonal band of Broca facilitated whisker responses in the S1 cortex of both hemispheres through activation of muscarinic cholinergic receptors, and this effect was diminished by atropine injection. In conclusion, our findings have revealed that specific areas of the BF project bilaterally to sensory cortices and may contribute to the coordination of neuronal activity in both hemispheres.
Affiliation(s)
- Irene Chaves-Coira
- Department of Anatomy, Histology and Neuroscience, School of Medicine, Universidad Autonoma de Madrid, Madrid, Spain
- Margarita L Rodrigo-Angulo
- Department of Anatomy, Histology and Neuroscience, School of Medicine, Universidad Autonoma de Madrid, Madrid, Spain
- Angel Nuñez
- Department of Anatomy, Histology and Neuroscience, School of Medicine, Universidad Autonoma de Madrid, Madrid, Spain
48
Abstract
Most behaviors in mammals are directly or indirectly guided by prior experience and therefore depend on the ability of our brains to form memories. The ability to form an association between a sensory stimulus that may initially be neutral and its behavioral relevance is essential for our ability to navigate in a changing environment. The formation of a memory is a complex process involving many areas of the brain. In this chapter we review classic and recent work that has shed light on the specific contribution of sensory cortical areas to the formation of associative memories. We discuss synaptic and circuit mechanisms that mediate plastic adaptations of functional properties in individual neurons as well as in larger neuronal populations forming topographically organized representations. Furthermore, we describe commonly used behavioral paradigms for studying the mechanisms of memory formation. We focus on the auditory modality, which is receiving increasing attention in the study of associative memory in rodent model systems. We argue that sensory cortical areas may play an important role in the memory-dependent categorical recognition of previously encountered sensory stimuli.
Affiliation(s)
- Dominik Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences (FTN), University Medical Center, Johannes Gutenberg University, Mainz, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences (FTN), University Medical Center, Johannes Gutenberg University, Mainz, Germany.
49
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238 PMCID: PMC5729191 DOI: 10.1523/jneurosci.1436-17.2017]
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
50
Attentional Modulation of Envelope-Following Responses at Lower (93-109 Hz) but Not Higher (217-233 Hz) Modulation Rates. J Assoc Res Otolaryngol 2017; 19:83-97. [PMID: 28971333 PMCID: PMC5783923 DOI: 10.1007/s10162-017-0641-9]
Abstract
Directing attention to sounds of different frequencies allows listeners to perceive a sound of interest, like a talker, in a mixture. Whether cortically generated frequency-specific attention affects responses as low as the auditory brainstem is currently unclear. Participants attended to either a high- or low-frequency tone stream, which was presented simultaneously and tagged with different amplitude modulation (AM) rates. In a replication design, we showed that envelope-following responses (EFRs) were modulated by attention only when the stimulus AM rate was slow enough for the auditory cortex to track—and not for stimuli with faster AM rates, which are thought to reflect ‘purer’ brainstem sources. Thus, we found no evidence of frequency-specific attentional modulation that can be confidently attributed to brainstem generators. The results demonstrate that different neural populations contribute to EFRs at higher and lower rates, compatible with cortical contributions at lower rates. The results further demonstrate that stimulus AM rate can alter conclusions of EFR studies.