1. Abram SV, Hua JPY, Nicholas S, Roach B, Keedy S, Sweeney JA, Mathalon DH, Ford JM. Pons-to-Cerebellum Hypoconnectivity Along the Psychosis Spectrum and Associations With Sensory Prediction and Hallucinations in Schizophrenia. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 2024; 9:693-702. [PMID: 38311290] [PMCID: PMC11227403] [DOI: 10.1016/j.bpsc.2024.01.010]
Abstract
BACKGROUND Sensory prediction allows the brain to anticipate and parse incoming self-generated sensory information from externally generated signals. Sensory prediction breakdowns may contribute to perceptual and agency abnormalities in psychosis (hallucinations, delusions). The pons, a central node in a cortico-ponto-cerebellar-thalamo-cortical circuit, is thought to support sensory prediction. Examination of pons connectivity in schizophrenia and its role in sensory prediction abnormalities is lacking. METHODS We examined these relationships using resting-state functional magnetic resonance imaging and the electroencephalography-based auditory N1 event-related potential in 143 participants with psychotic spectrum disorders (PSPs) (with schizophrenia, schizoaffective disorder, or bipolar disorder); 63 first-degree relatives of individuals with psychosis; 45 people at clinical high risk for psychosis; and 124 unaffected comparison participants. This unique sample allowed examination across the psychosis spectrum and illness trajectory. Seeding from the pons, we extracted average connectivity values from thalamic and cerebellar clusters showing differences between PSPs and unaffected comparison participants. We predicted N1 amplitude attenuation during a vocalization task from pons connectivity and group membership. We correlated participant-level connectivity in PSPs and people at clinical high risk for psychosis with hallucination and delusion severity. RESULTS Compared to unaffected comparison participants, PSPs showed pons hypoconnectivity to 2 cerebellar clusters, and first-degree relatives of individuals with psychosis showed hypoconnectivity to 1 of these clusters. Pons-to-cerebellum connectivity was positively correlated with N1 attenuation; only PSPs with heightened pons-to-postcentral gyrus connectivity showed this pattern, suggesting a possible compensatory mechanism. Pons-to-cerebellum hypoconnectivity was correlated with greater hallucination severity specifically among PSPs with schizophrenia. CONCLUSIONS Deficient pons-to-cerebellum connectivity linked sensory prediction network breakdowns with perceptual abnormalities in schizophrenia. Findings highlight shared features and clinical heterogeneity across the psychosis spectrum.
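For orientation, the core statistical step described above (predicting N1 amplitude attenuation from pons seed connectivity and group membership) corresponds to a linear model with a group-by-connectivity interaction. The sketch below is only a minimal illustration under assumed variable names (n1_attenuation, pons_cereb_conn, group) and simulated data; it is not the authors' pipeline.

```python
# Hypothetical sketch (not the authors' code): model N1 attenuation as a
# function of pons-to-cerebellum connectivity, group, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "group": rng.choice(["psychosis", "control"], size=n),
    "pons_cereb_conn": rng.normal(0.3, 0.1, size=n),  # placeholder Fisher-z connectivity
})
# Simulated outcome: attenuation scales with connectivity, more steeply in controls.
slope = np.where(df["group"] == "control", 8.0, 3.0)
df["n1_attenuation"] = slope * df["pons_cereb_conn"] + rng.normal(0, 1, size=n)

fit = smf.ols("n1_attenuation ~ pons_cereb_conn * C(group)", data=df).fit()
print(fit.params)  # the interaction term asks whether the slope differs by group
```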
Affiliation(s)
- Samantha V Abram: Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California; San Francisco Veterans Affairs Health Care System, San Francisco, California
- Jessica P Y Hua: Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California; San Francisco Veterans Affairs Health Care System, San Francisco, California
- Spero Nicholas: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Brian Roach: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Sarah Keedy: Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, Illinois
- John A Sweeney: Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati, Cincinnati, Ohio
- Daniel H Mathalon: Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California; San Francisco Veterans Affairs Health Care System, San Francisco, California
- Judith M Ford: Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California; San Francisco Veterans Affairs Health Care System, San Francisco, California
2. Ozker M, Yu L, Dugan P, Doyle W, Friedman D, Devinsky O, Flinker A. Speech-induced suppression and vocal feedback sensitivity in human cortex. bioRxiv [Preprint] 2024:2023.12.08.570736. [PMID: 38370843] [PMCID: PMC10871232] [DOI: 10.1101/2023.12.08.570736]
Abstract
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has been previously confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
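As a rough illustration of the relationship reported above between speech-induced suppression and feedback sensitivity, the sketch below computes a per-electrode suppression index and correlates it with a simulated measure of delayed-auditory-feedback sensitivity. The normalized index and the placeholder data are assumptions, not the study's actual analysis code.

```python
# Illustrative sketch (not the authors' code): per-electrode speech-induced
# suppression and its correlation with feedback (DAF) sensitivity.
import numpy as np
from scipy.stats import pearsonr

def suppression_index(speak_resp, listen_resp):
    """Normalized suppression: positive values mean weaker responses while speaking."""
    return (listen_resp - speak_resp) / (listen_resp + speak_resp)

rng = np.random.default_rng(0)
listen = rng.uniform(1.0, 3.0, size=50)            # placeholder responses per electrode
speak = listen * rng.uniform(0.4, 1.0, size=50)    # suppressed during speaking
supp = suppression_index(speak, listen)
daf_sensitivity = 0.8 * supp + rng.normal(0, 0.1, size=50)  # placeholder DAF measure

r, p = pearsonr(supp, daf_sensitivity)
print(f"r = {r:.2f}, p = {p:.3g}")
```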
Affiliation(s)
- Muge Ozker: Neurology Department, New York University, New York, NY 10016, USA; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Leyao Yu: Neurology Department, New York University, New York, NY 10016, USA; Biomedical Engineering Department, New York University, Brooklyn, NY 11201, USA
- Patricia Dugan: Neurology Department, New York University, New York, NY 10016, USA
- Werner Doyle: Neurosurgery Department, New York University, New York, NY 10016, USA
- Daniel Friedman: Neurology Department, New York University, New York, NY 10016, USA
- Orrin Devinsky: Neurology Department, New York University, New York, NY 10016, USA
- Adeen Flinker: Neurology Department, New York University, New York, NY 10016, USA; Biomedical Engineering Department, New York University, Brooklyn, NY 11201, USA
3. Beach SD, Tang DL, Kiran S, Niziolek CA. Pars Opercularis Underlies Efferent Predictions and Successful Auditory Feedback Processing in Speech: Evidence From Left-Hemisphere Stroke. Neurobiology of Language 2024; 5:454-483. [PMID: 38911464] [PMCID: PMC11192514] [DOI: 10.1162/nol_a_00139]
Abstract
Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere motor planning regions are thought to give rise to efferent predictions that can be compared to true feedback in sensory cortices, resulting in neural suppression commensurate with the degree of overlap between predicted and actual sensations. Sensory prediction errors thus serve as a possible mechanism of detection of deviant speech sounds, which can then feed back into corrective action, allowing for online control of speech acoustics. The goal of this study was to assess the integrity of this detection-correction circuit in persons with aphasia (PWA) whose left-hemisphere lesions may limit their ability to control variability in speech output. We recorded magnetoencephalography (MEG) while 15 PWA and age-matched controls spoke monosyllabic words and listened to playback of their utterances. From this, we measured speaking-induced suppression of the M100 neural response and related it to lesion profiles and speech behavior. Both speaking-induced suppression and cortical sensitivity to deviance were preserved at the group level in PWA. PWA with more spared tissue in pars opercularis had greater left-hemisphere neural suppression and greater behavioral correction of acoustically deviant pronunciations, whereas sparing of superior temporal gyrus was not related to neural suppression or acoustic behavior. In turn, PWA who made greater corrections had fewer overt speech errors in the MEG task. Thus, the motor planning regions that generate the efferent prediction are integral to performing corrections when that prediction is violated.
Affiliation(s)
- Ding-lan Tang: Waisman Center, The University of Wisconsin–Madison; Academic Unit of Human Communication, Development, and Information Sciences, University of Hong Kong, Hong Kong SAR, China
- Swathi Kiran: Department of Speech, Language & Hearing Sciences, Boston University
- Caroline A. Niziolek: Waisman Center, The University of Wisconsin–Madison; Department of Communication Sciences and Disorders, The University of Wisconsin–Madison
4. Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. Journal of Speech, Language, and Hearing Research 2024; 67:1424-1460. [PMID: 38593006] [DOI: 10.1044/2024_jslhr-23-00575]
Abstract
PURPOSE The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
5. Tsunada J, Wang X, Eliades SJ. Multiple processes of vocal sensory-motor interaction in primate auditory cortex. Nat Commun 2024; 15:3093. [PMID: 38600118] [PMCID: PMC11006904] [DOI: 10.1038/s41467-024-47510-2]
Abstract
Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes, one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback, rather than a single mechanism. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle these processes in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), whereas tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges during vocalization, with different computational mechanisms.
Affiliation(s)
- Joji Tsunada: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; Chinese Institute for Brain Research, Beijing, China
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Steven J Eliades: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC, USA
6. Wang H, Ali Y, Max L. Perceptual formant discrimination during speech movement planning. PLoS One 2024; 19:e0301514. [PMID: 38564597] [PMCID: PMC10986972] [DOI: 10.1371/journal.pone.0301514]
Abstract
Evoked potential studies have shown that speech planning modulates auditory cortical responses. The phenomenon's functional relevance is unknown. We tested whether, during this time window of cortical auditory modulation, there is an effect on speakers' perceptual sensitivity for vowel formant discrimination. Participants made same/different judgments for pairs of stimuli consisting of a pre-recorded, self-produced vowel and a formant-shifted version of the same production. Stimuli were presented prior to a "go" signal for speaking, prior to passive listening, and during silent reading. The formant discrimination stimulus /uh/ was tested with a congruent productions list (words with /uh/) and an incongruent productions list (words without /uh/). Logistic curves were fitted to participants' responses, and the just-noticeable difference (JND) served as a measure of discrimination sensitivity. We found a statistically significant effect of condition (worst discrimination before speaking) without congruency effect. Post-hoc pairwise comparisons revealed that JND was significantly greater before speaking than during silent reading. Thus, formant discrimination sensitivity was reduced during speech planning regardless of the congruence between discrimination stimulus and predicted acoustic consequences of the planned speech movements. This finding may inform ongoing efforts to determine the functional relevance of the previously reported modulation of auditory processing during speech planning.
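The psychometric analysis described in this abstract (logistic fit, just-noticeable difference) can be sketched as follows; the threshold convention (75% "different" responses), shift values, and response proportions are illustrative assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch: fit a logistic psychometric function to same/different
# responses and read off a just-noticeable difference (JND).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'different' responses as a function of formant shift (Hz)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

shifts = np.array([0, 10, 20, 40, 80, 160], dtype=float)   # assumed formant shifts (Hz)
p_diff = np.array([0.05, 0.15, 0.35, 0.65, 0.90, 0.98])     # placeholder proportions

(x0, k), _ = curve_fit(logistic, shifts, p_diff, p0=[40.0, 0.05])

# One common convention: JND as the shift producing 75% 'different' responses.
threshold = 0.75
jnd = x0 - np.log(1.0 / threshold - 1.0) / k
print(f"midpoint = {x0:.1f} Hz, JND(75%) = {jnd:.1f} Hz")
```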
Affiliation(s)
- Hantao Wang: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
- Yusuf Ali: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
- Ludo Max: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
7. Tsunada J, Eliades SJ. Frontal-Auditory Cortical Interactions and Sensory Prediction During Vocal Production in Marmoset Monkeys. bioRxiv [Preprint] 2024:2024.01.28.577656. [PMID: 38352422] [PMCID: PMC10862695] [DOI: 10.1101/2024.01.28.577656]
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. Consistent with this model, recent evidence has demonstrated that the auditory cortex is suppressed immediately before and during vocal production, yet is still sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to be the result of top-down signals containing information about the intended vocal output, potentially originating from motor or other frontal cortical areas. However, whether such frontal areas are the source of suppressive and predictive signaling to the auditory cortex during vocalization is unknown. Here, we simultaneously recorded neural activity from both the auditory and frontal cortices of marmoset monkeys while they produced self-initiated vocalizations. We found increases in neural activity in both brain areas preceding the onset of vocal production, notably changes in both multi-unit activity and local field potential theta-band power. Connectivity analysis using Granger causality demonstrated that frontal cortex sends directed signaling to the auditory cortex during this pre-vocal period. Importantly, this pre-vocal activity predicted both vocalization-induced suppression of the auditory cortex as well as the acoustics of subsequent vocalizations. These results suggest that frontal cortical areas communicate with the auditory cortex preceding vocal production, with frontal-auditory signals that may reflect the transmission of sensory prediction information. This interaction between frontal and auditory cortices may contribute to mechanisms that calculate errors between intended and actual vocal outputs during vocal communication.
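A minimal sketch of the directed-connectivity step named above (Granger causality from frontal to auditory signals) is shown below using simulated signals; the lag range, the toy coupling, and the use of statsmodels are assumptions for illustration only, not the study's pipeline.

```python
# Hedged sketch: test directed (Granger) influence from a frontal-cortex signal
# onto an auditory-cortex signal in a pre-vocal window, using toy data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
frontal = rng.normal(size=n)
auditory = np.zeros(n)
for t in range(2, n):  # toy coupling: auditory lags frontal by ~2 samples
    auditory[t] = 0.5 * auditory[t - 1] + 0.4 * frontal[t - 2] + rng.normal(scale=0.5)

# Column order matters: the test asks whether the 2nd column Granger-causes the 1st.
data = np.column_stack([auditory, frontal])
results = grangercausalitytests(data, maxlag=5, verbose=False)
for lag, res in results.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.1f}, p = {p_val:.3g}")
```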
Affiliation(s)
- Joji Tsunada: Chinese Institute for Brain Research, Beijing, China; Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Iwate, Japan
- Steven J. Eliades: Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
8. Wang H, Ali Y, Max L. Perceptual formant discrimination during speech movement planning. bioRxiv [Preprint] 2023:2023.10.11.561423. [PMID: 37873157] [PMCID: PMC10592784] [DOI: 10.1101/2023.10.11.561423]
Abstract
Evoked potential studies have shown that speech planning modulates auditory cortical responses. The phenomenon's functional relevance is unknown. We tested whether, during this time window of cortical auditory modulation, there is an effect on speakers' perceptual sensitivity for vowel formant discrimination. Participants made same/different judgments for pairs of stimuli consisting of a pre-recorded, self-produced vowel and a formant-shifted version of the same production. Stimuli were presented prior to a "go" signal for speaking, prior to passive listening, and during silent reading. The formant discrimination stimulus /uh/ was tested with a congruent productions list (words with /uh/) and an incongruent productions list (words without /uh/). Logistic curves were fitted to participants' responses, and the just-noticeable difference (JND) served as a measure of discrimination sensitivity. We found a statistically significant effect of condition (worst discrimination before speaking) without congruency effect. Post-hoc pairwise comparisons revealed that JND was significantly greater before speaking than during silent reading. Thus, formant discrimination sensitivity was reduced during speech planning regardless of the congruence between discrimination stimulus and predicted acoustic consequences of the planned speech movements. This finding may inform ongoing efforts to determine the functional relevance of the previously reported modulation of auditory processing during speech planning.
Affiliation(s)
- Hantao Wang: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
- Yusuf Ali: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
- Ludo Max: Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America
9. Eliades SJ, Tsunada J. Effects of Cortical Stimulation on Feedback-Dependent Vocal Control in Non-Human Primates. Laryngoscope 2023; 133 Suppl 2:S1-S10. [PMID: 35538859] [PMCID: PMC9649833] [DOI: 10.1002/lary.30175]
Abstract
OBJECTIVES Hearing plays an important role in our ability to control voice, and perturbations in auditory feedback result in compensatory changes in vocal production. The auditory cortex (AC) has been proposed as an important mediator of this behavior, but causal evidence is lacking. We tested this in an animal model, hypothesizing that AC is necessary for vocal self-monitoring and feedback-dependent control, and that altering activity in AC during vocalization will interfere with vocal control. METHODS We implanted two marmoset monkeys (Callithrix jacchus) with bilateral AC electrode arrays. Acoustic signals were recorded from vocalizing marmosets while vocal feedback was altered or AC was electrically stimulated during random subsets of vocalizations. Feedback was altered by real-time frequency shifts presented through headphones, and electrical stimulation was delivered to individual electrodes. We analyzed recordings to measure changes in vocal acoustics during shifted feedback and stimulation, and to determine their interaction. Results were correlated with the location and frequency tuning of stimulation sites. RESULTS Consistent with previous results, we found that electrical stimulation alone evoked changes in vocal production. Effects were stronger in the right hemisphere but decreased with lower currents or repeated stimulation. Simultaneous stimulation and shifted feedback significantly altered vocal control for a subset of sites, decreasing feedback compensation at some and increasing it at others. Inhibited compensation was more likely at sites tuned closer to vocal frequencies. CONCLUSIONS Results provide causal evidence that the AC is involved in feedback-dependent vocal control, and that it is sufficient, and may also be necessary, to drive changes in vocal production. LEVEL OF EVIDENCE N/A.
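For readers unfamiliar with the feedback-shift metric implied here, vocal compensation is commonly expressed in cents relative to a baseline fundamental frequency. The toy calculation below illustrates that convention with placeholder values; it is not drawn from the study's data or code.

```python
# Illustrative calculation: vocal pitch compensation in cents relative to a
# baseline fundamental frequency (F0). All numbers are placeholders.
import numpy as np

def cents(f, f_ref):
    """Pitch difference of f relative to f_ref in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(np.asarray(f, dtype=float) / f_ref)

baseline_f0 = 7500.0                        # placeholder marmoset phee F0 in Hz
shifted_trials = [7620.0, 7580.0, 7650.0]   # F0 produced under shifted feedback
control_trials = [7490.0, 7510.0, 7505.0]   # F0 produced with unaltered feedback

compensation = cents(shifted_trials, baseline_f0).mean() - cents(control_trials, baseline_f0).mean()
print(f"mean compensatory F0 change: {compensation:.1f} cents")
```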
Affiliation(s)
- Steven J Eliades: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Joji Tsunada: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA; Chinese Institute for Brain Research, Beijing, China
10. Johari K, Kelley RM, Tjaden K, Patterson CG, Rohl AH, Berger JI, Corcos DM, Greenlee JDW. Human subthalamic nucleus neurons differentially encode speech and limb movement. Front Hum Neurosci 2023; 17:962909. [PMID: 36875233] [PMCID: PMC9983637] [DOI: 10.3389/fnhum.2023.962909]
Abstract
Deep brain stimulation (DBS) of the subthalamic nucleus (STN), which consistently improves limb motor functions, shows mixed effects on speech functions in Parkinson's disease (PD). One possible explanation for this discrepancy is that STN neurons may differentially encode speech and limb movement. However, this hypothesis has not yet been tested. We examined how STN is modulated by limb movement and speech by recording 69 single- and multi-unit neuronal clusters in 12 intraoperative PD patients. Our findings indicated: (1) diverse patterns of modulation in neuronal firing rates in STN for speech and limb movement; (2) a higher number of STN neurons were modulated by speech vs. limb movement; (3) an overall increase in neuronal firing rates for speech vs. limb movement; and (4) participants with longer disease duration had higher firing rates. These data provide new insights into the role of STN neurons in speech and limb movement.
Affiliation(s)
- Karim Johari: Human Neurophysiology and Neuromodulation Lab, Department of Communication Science and Disorders, Louisiana State University, Baton Rouge, LA, United States; Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Ryan M Kelley: Medical Scientist Training Program, The University of Iowa, Iowa City, IA, United States; Program in Neuroscience, The University of Iowa, Iowa City, IA, United States
- Kris Tjaden: Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, United States
- Charity G Patterson: Department of Physical Therapy, University of Pittsburgh, Pittsburgh, PA, United States
- Andrea H Rohl: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Joel I Berger: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Daniel M Corcos: Department of Physical Therapy & Human Movement Sciences, Northwestern University, Chicago, IL, United States
- Jeremy D W Greenlee: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States; Program in Neuroscience, The University of Iowa, Iowa City, IA, United States; Iowa Neuroscience Institute, Iowa City, IA, United States
11. Griffiths O, Jack BN, Pearson D, Elijah R, Mifsud N, Han N, Libesman S, Rita Barreiros A, Turnbull L, Balzan R, Le Pelley M, Harris A, Whitford TJ. Disrupted auditory N1, theta power and coherence suppression to willed speech in people with schizophrenia. Neuroimage Clin 2023; 37:103290. [PMID: 36535137] [PMCID: PMC9792888] [DOI: 10.1016/j.nicl.2022.103290]
Abstract
The phenomenon of sensory self-suppression - also known as sensory attenuation - occurs when a person generates a perceptible stimulus (such as a sound) by performing an action (such as speaking). The sensorimotor control system is thought to actively predict and then suppress the vocal sound in the course of speaking, resulting in lowered cortical responsiveness when speaking than when passively listening to an identical sound. It has been hypothesized that auditory hallucinations in schizophrenia result from a reduction in self-suppression due to a disruption of predictive mechanisms required to anticipate and suppress a specific, self-generated sound. It has further been hypothesized that this suppression is evident primarily in theta band activity. Fifty-one people, half of whom had a diagnosis of schizophrenia, were asked to repeatedly utter a single syllable, which was played back to them concurrently over headphones while EEG was continuously recorded. In other conditions, recordings of the same spoken syllables were played back to participants while they passively listened, or were played back with their onsets preceded by a visual cue. All participants experienced these conditions with their voice artificially shifted in pitch and also with their unaltered voice. Suppression was measured using event-related potentials (N1 component), theta phase coherence and power. We found that suppression was generally reduced on all metrics in the patient sample, and when voice alteration was applied. We additionally observed reduced theta coherence and power in the patient sample across all conditions. Visual cueing affected theta coherence only. In aggregate, the results suggest that sensory self-suppression of theta power and coherence is disrupted in schizophrenia.
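One of the measures named above, theta inter-trial phase coherence, can be sketched in a few lines; the filter settings, sampling rate, and epoch layout below are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch: inter-trial theta phase coherence (ITC) from single-trial
# EEG epochs time-locked to sound onset. All parameters are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itc(epochs, fs, band=(4.0, 8.0)):
    """epochs: (n_trials, n_samples) array; returns ITC in [0, 1] per sample."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    theta = filtfilt(b, a, epochs, axis=1)           # zero-phase theta-band filter
    phase = np.angle(hilbert(theta, axis=1))         # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

fs = 500.0
rng = np.random.default_rng(2)
epochs = rng.normal(size=(60, int(fs)))              # placeholder: 60 trials, 1 s each
print(theta_itc(epochs, fs).max())
```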
Affiliation(s)
- Oren Griffiths: College of Education, Psychology and Social Work, Flinders University, Adelaide, Australia; Flinders Institute for Mental Health and Wellbeing, Adelaide, Australia
- Bradley N Jack: Research School of Psychology, Australian National University, Canberra, Australia
- Ruth Elijah: School of Psychology, UNSW Sydney, Sydney, Australia
- Nathan Mifsud: School of Psychology, UNSW Sydney, Sydney, Australia
- Nathan Han: School of Psychology, UNSW Sydney, Sydney, Australia
- Sol Libesman: School of Psychology, UNSW Sydney, Sydney, Australia
- Ana Rita Barreiros: Specialty of Psychiatry, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; Brain Dynamics Centre, The Westmead Institute for Medical Research, The University of Sydney, Sydney, Australia
- Luke Turnbull: College of Education, Psychology and Social Work, Flinders University, Adelaide, Australia
- Ryan Balzan: College of Education, Psychology and Social Work, Flinders University, Adelaide, Australia; Flinders Institute for Mental Health and Wellbeing, Adelaide, Australia
- Anthony Harris: Specialty of Psychiatry, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; Brain Dynamics Centre, The Westmead Institute for Medical Research, The University of Sydney, Sydney, Australia
- Thomas J Whitford: Specialty of Psychiatry, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
12. Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022; 12:18789. [PMID: 36335137] [PMCID: PMC9637225] [DOI: 10.1038/s41598-022-22041-2]
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) attended speech, or performed visual or speech motor control tasks in which they did not attend to speech and responses were not related to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction is present for motor vs. perceptual processing of speech already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than on any specific anatomical landmark, as some previous studies have suggested.
13. Lohse M, Zimmer-Harwood P, Dahmen JC, King AJ. Integration of somatosensory and motor-related information in the auditory system. Front Neurosci 2022; 16:1010211. [PMID: 36330342] [PMCID: PMC9622781] [DOI: 10.3389/fnins.2022.1010211]
Abstract
An ability to integrate information provided by different sensory modalities is a fundamental feature of neurons in many brain areas. Because visual and auditory inputs often originate from the same external object, which may be located some distance away from the observer, the synthesis of these cues can improve localization accuracy and speed up behavioral responses. By contrast, multisensory interactions occurring close to the body typically involve a combination of tactile stimuli with other sensory modalities. Moreover, most activities involving active touch generate sound, indicating that stimuli in these modalities are frequently experienced together. In this review, we examine the basis for determining sound-source distance and the contribution of auditory inputs to the neural encoding of space around the body. We then consider the perceptual consequences of combining auditory and tactile inputs in humans and discuss recent evidence from animal studies demonstrating how cortical and subcortical areas work together to mediate communication between these senses. This research has shown that somatosensory inputs interface with and modulate sound processing at multiple levels of the auditory pathway, from the cochlear nucleus in the brainstem to the cortex. Circuits involving inputs from the primary somatosensory cortex to the auditory midbrain have been identified that mediate suppressive effects of whisker stimulation on auditory thalamocortical processing, providing a possible basis for prioritizing the processing of tactile cues from nearby objects. Close links also exist between audition and movement, and auditory responses are typically suppressed by locomotion and other actions. These movement-related signals are thought to cancel out self-generated sounds, but they may also affect auditory responses via the associated somatosensory stimulation or as a result of changes in brain state. Together, these studies highlight the importance of considering both multisensory context and movement-related activity in order to understand how the auditory cortex operates during natural behaviors, paving the way for future work to investigate auditory-somatosensory interactions in more ecological situations.
14. Macias S, Bakshi K, Troyer T, Smotherman M. The prefrontal cortex of the Mexican free-tailed bat is more selective to communication calls than primary auditory cortex. J Neurophysiol 2022; 128:634-648. [PMID: 35975923] [PMCID: PMC9448334] [DOI: 10.1152/jn.00436.2021]
Abstract
In this study, we examined the auditory responses of a prefrontal area, the frontal auditory field (FAF), of an echolocating bat (Tadarida brasiliensis) and presented a comparative analysis of neuronal response properties between the FAF and the primary auditory cortex (A1). We compared single-unit responses from the A1 and the FAF elicited by pure tones, downward frequency-modulated sweeps (dFMs), and species-specific vocalizations. Unlike A1 neurons, FAF neurons were not frequency tuned. However, progressive increases in dFM sweep rate elicited a systematic increase in response precision, a phenomenon that does not occur in the A1. Call selectivity was higher in the FAF than in the A1. We calculated neuronal spectrotemporal receptive fields (STRFs) and spike-triggered averages (STAs) to predict responses to the communication calls and to explain the differences in call selectivity between the FAF and A1. In the A1, we found a high correlation between predicted and evoked responses. However, we could not generate reasonable STRFs in the FAF, and predictions based on the STAs showed a lower correlation coefficient than in the A1. This suggests nonlinear response properties in the FAF that are stronger than the linear response properties in the A1. Stimulating with a call sequence increased call selectivity in the A1, but it remained unchanged in the FAF. These data are consistent with a role for the FAF in assessing distinctive acoustic features downstream of A1, similar to the role proposed for primate ventrolateral prefrontal cortex. NEW & NOTEWORTHY In this study, we examined the neuronal responses of a frontal cortical area in an echolocating bat to behaviorally relevant acoustic stimuli and compared them with those in the primary auditory cortex (A1). In contrast to the A1, neurons in the bat frontal auditory field are not frequency tuned but show higher selectivity for social signals such as communication calls. The results presented here indicate that the frontal auditory field may represent an additional processing center for behaviorally relevant sounds.
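As a pointer to the STA method mentioned in this abstract, the sketch below averages stimulus-spectrogram snippets preceding each spike; the array shapes, window length, and placeholder data are assumptions rather than the authors' implementation.

```python
# Hedged sketch of a spike-triggered average (STA): average the stimulus
# spectrogram in a window preceding each spike.
import numpy as np

def spike_triggered_average(spec, spike_bins, window):
    """spec: (n_freqs, n_timebins) spectrogram; spike_bins: spike time-bin indices;
    window: number of bins preceding each spike to average."""
    snippets = [spec[:, t - window:t] for t in spike_bins if t >= window]
    return np.mean(snippets, axis=0)  # (n_freqs, window) STA

rng = np.random.default_rng(3)
spec = rng.normal(size=(32, 5000))            # placeholder spectrogram
spikes = rng.integers(100, 5000, size=400)    # placeholder spike times (bins)
sta = spike_triggered_average(spec, spikes, window=50)
print(sta.shape)  # (32, 50)
```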
Affiliation(s)
- Silvio Macias: Department of Biology, Texas A&M University, College Station, Texas
- Kushal Bakshi: Institute for Neuroscience, Texas A&M University, College Station, Texas
- Todd Troyer: Department of Neuroscience, Developmental and Regenerative Biology, University of Texas at San Antonio, San Antonio, Texas
- Michael Smotherman: Department of Biology, Texas A&M University, College Station, Texas; Institute for Neuroscience, Texas A&M University, College Station, Texas
15. Paraskevoudi N, SanMiguel I. Sensory suppression and increased neuromodulation during actions disrupt memory encoding of unpredictable self-initiated stimuli. Psychophysiology 2022; 60:e14156. [PMID: 35918912] [PMCID: PMC10078310] [DOI: 10.1111/psyp.14156]
Abstract
Actions modulate sensory processing by attenuating responses to self- compared to externally generated inputs, which is traditionally attributed to stimulus-specific motor predictions. Yet, suppression has also been found for stimuli merely coinciding with actions, pointing to unspecific processes that may be driven by neuromodulatory systems. Meanwhile, the differential processing of self-generated stimuli raises the possibility of effects on memory for these stimuli; however, evidence remains mixed as to the direction of such effects. Here, we assessed the effects of actions on sensory processing and memory encoding of concomitant but unpredictable sounds, using a combined self-generation and memory recognition task with concurrent EEG and pupil recordings. At encoding, subjects performed button presses that half of the time generated a sound (motor-auditory; MA) and listened to passively presented sounds (auditory-only; A). At retrieval, two sounds were presented and participants had to respond which one had been presented before. We measured memory bias and memory performance using sequences in which either both or only one of the test sounds had been presented at encoding, respectively. Results showed worse memory performance (but no differences in memory bias), attenuated responses, and larger pupil diameter for MA compared to A sounds. Critically, the larger the sensory attenuation and pupil diameter, the worse the memory performance for MA sounds. Nevertheless, sensory attenuation did not correlate with pupil dilation. Collectively, our findings suggest that sensory attenuation and neuromodulatory processes coexist during actions, and both relate to disrupted memory for concurrent, albeit unpredictable, sounds.
Affiliation(s)
- Nadia Paraskevoudi: Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, University of Barcelona, Barcelona, Spain
- Iria SanMiguel: Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain; Brainlab-Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
16. Echolocation-related reversal of information flow in a cortical vocalization network. Nat Commun 2022; 13:3642. [PMID: 35752629] [PMCID: PMC9233670] [DOI: 10.1038/s41467-022-31230-6]
Abstract
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
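The transfer entropy analysis referenced above can be approximated, for illustration, with a simple histogram-based estimator using one-sample histories; the binning, history length, and simulated signals below are assumptions and much cruder than the estimators typically used on LFP data.

```python
# Hedged sketch: histogram-based transfer entropy TE(Y -> X), the directed
# dependence of X's next sample on Y's past beyond X's own past (1-sample histories).
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=8):
    """Estimate TE(Y -> X) in bits from two equal-length 1-D signals."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=n_bins)[1:-1])
    triples = list(zip(xd[1:], xd[:-1], yd[:-1]))    # (x_next, x_past, y_past)
    n = len(triples)
    c_xny = Counter(triples)
    c_xy = Counter((xp, yp) for _, xp, yp in triples)
    c_xnx = Counter((xn, xp) for xn, xp, _ in triples)
    c_x = Counter(xp for _, xp, _ in triples)
    te = 0.0
    for (xn, xp, yp), c in c_xny.items():
        p_cond_full = c / c_xy[(xp, yp)]             # p(x_next | x_past, y_past)
        p_cond_self = c_xnx[(xn, xp)] / c_x[xp]      # p(x_next | x_past)
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(4)
frontal = rng.normal(size=5000)
auditory = 0.8 * np.roll(frontal, 1) + 0.6 * rng.normal(size=5000)  # auditory lags frontal
print(f"TE frontal->auditory: {transfer_entropy(auditory, frontal):.3f} bits")
print(f"TE auditory->frontal: {transfer_entropy(frontal, auditory):.3f} bits")
```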
17. Han HJ, Powers SJ, Gabrielson KL. The Common Marmoset: Biomedical Research Animal Model Applications and Common Spontaneous Diseases. Toxicol Pathol 2022; 50:628-637. [PMID: 35535728] [DOI: 10.1177/01926233221095449]
Abstract
Marmosets are becoming more widely utilized in biomedical research due to multiple advantages, including (1) smaller size and lower housing cost than other nonhuman primates, (2) physiologic similarities to humans, (3) translatable hepatic metabolism, (4) higher numbers of litters per year, (5) a sequenced genome and available molecular reagents, (6) immunologic similarity to humans, (7) the availability of transgenic marmosets with germline transmission, and (8) naturally occurring hematopoietic chimerism. With more use of marmosets, disease surveillance over a wide range of ages has been performed. This has led to a better understanding of the management of spontaneous diseases that can occur in colonies. Knowledge of clinical signs and histologic lesions can help maximize colony health, allowing for improved outcomes in translational studies within biomedical research. Here, we describe basic husbandry, biology, common spontaneous diseases, and animal model applications for the common marmoset in biomedical research.
Affiliation(s)
- Hyo-Jeong Han: Department of Molecular and Comparative Pathobiology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; University of Ulsan, College of Medicine, Seoul, Korea
- Sarah J Powers: Department of Molecular and Comparative Pathobiology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kathleen L Gabrielson: Department of Molecular and Comparative Pathobiology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, Maryland, USA
18. Neural correlates of impaired vocal feedback control in post-stroke aphasia. Neuroimage 2022; 250:118938. [PMID: 35092839] [PMCID: PMC8920755] [DOI: 10.1016/j.neuroimage.2022.118938]
Abstract
We used left-hemisphere stroke as a model to examine how damage to sensorimotor brain networks impairs vocal auditory feedback processing and control. Individuals with post-stroke aphasia and matched neurotypical control subjects vocalized speech vowel sounds and listened to the playback of their self-produced vocalizations under normal (NAF) and pitch-shifted altered auditory feedback (AAF) while their brain activity was recorded using electroencephalography (EEG) signals. Event-related potentials (ERPs) were utilized as a neural index to probe the effect of vocal production on auditory feedback processing with high temporal resolution, while lesion data in the stroke group was used to determine how brain abnormality accounted for the impairment of such mechanisms. Results revealed that ERP activity was aberrantly modulated during vocalization vs. listening in aphasia, and this effect was accompanied by the reduced magnitude of compensatory vocal responses to pitch-shift alterations in the auditory feedback compared with control subjects. Lesion-mapping revealed that the aberrant pattern of ERP modulation in response to NAF was accounted for by damage to sensorimotor networks within the left-hemisphere inferior frontal, precentral, inferior parietal, and superior temporal cortices. For responses to AAF, neural deficits were predicted by damage to a distinguishable network within the inferior frontal and parietal cortices. These findings define the left-hemisphere sensorimotor networks implicated in auditory feedback processing, error detection, and vocal motor control. Our results provide translational synergy to inform the theoretical models of sensorimotor integration while having clinical applications for diagnosis and treatment of communication disabilities in individuals with stroke and other neurological conditions.
19. Banerjee A, Vallentin D. Convergent behavioral strategies and neural computations during vocal turn-taking across diverse species. Curr Opin Neurobiol 2022; 73:102529. [DOI: 10.1016/j.conb.2022.102529]
20. Braga A, Schönwiesner M. Neural Substrates and Models of Omission Responses and Predictive Processes. Front Neural Circuits 2022; 16:799581. [PMID: 35177967] [PMCID: PMC8844463] [DOI: 10.3389/fncir.2022.799581]
Abstract
Predictive coding theories argue that deviance detection phenomena, such as mismatch responses and omission responses, are generated by predictive processes with possibly overlapping neural substrates. Molecular imaging and electrophysiology studies of mismatch responses and corollary discharge in the rodent model allowed the development of mechanistic and computational models of these phenomena. These models enable translation between human and non-human animal research and help to uncover fundamental features of change-processing microcircuitry in the neocortex. This microcircuitry is characterized by stimulus-specific adaptation and feedforward inhibition of stimulus-selective populations of pyramidal neurons and interneurons, with specific contributions from different interneuron types. The overlap of the substrates of different types of responses to deviant stimuli remains to be understood. Omission responses, which are observed both in corollary discharge and mismatch response protocols in humans, are underutilized in animal research and may be pivotal in uncovering the substrates of predictive processes. Omission studies comprise a range of methods centered on the withholding of an expected stimulus. This review aims to provide an overview of omission protocols and showcase their potential to integrate and complement the different models and procedures employed to study prediction and deviance detection. This approach may reveal the biological foundations of core concepts of predictive coding, and allow an empirical test of the framework's promise to unify theoretical models of attention and perception.
Affiliation(s)
- Alessandro Braga: Institute of Biology, Faculty of Life Sciences, University of Leipzig, Leipzig, Germany; International Max Planck Research School, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marc Schönwiesner: Institute of Biology, Faculty of Life Sciences, University of Leipzig, Leipzig, Germany; International Laboratory for Research on Brain, Music, and Sound (BRAMS), Université de Montréal, Montreal, QC, Canada
21. Shao X, Liao Y, Gu L, Chen W, Tang J. The Etiology of Auditory Hallucinations in Schizophrenia: From Multidimensional Levels. Front Neurosci 2021; 15:755870. [PMID: 34858129] [PMCID: PMC8632545] [DOI: 10.3389/fnins.2021.755870]
Abstract
Enormous efforts have been made to unveil the etiology of auditory hallucinations (AHs), and multiple genetic and neural factors have already been shown to have their own roles. Previous studies have shown that AHs in schizophrenia vary from those in other disorders, suggesting that they have unique features and possibly distinguishable mechanisms worthy of further investigation. In this review, we intend to offer a comprehensive summary of current findings related to AHs in schizophrenia from aspects of genetics and transcriptome, neurophysiology (neurometabolic and electroencephalogram studies), and neuroimaging (structural and functional magnetic resonance imaging studies and transcriptome–neuroimaging association study). Main findings include gene polymorphisms, glutamate level change, electroencephalographic alterations, and abnormalities of white matter fasciculi, cortical structure, and cerebral activities, especially in multiple regions, including auditory and language networks. More solid and comparable research is needed to replicate and integrate ongoing findings from multidimensional levels.
Affiliation(s)
- Xu Shao: Department of Psychiatry, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yanhui Liao: Department of Psychiatry, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Lin Gu: RIKEN AIP, Tokyo, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Wei Chen: Department of Psychiatry, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jinsong Tang: Department of Psychiatry, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
22. Paraskevoudi N, SanMiguel I. Self-generation and sound intensity interactively modulate perceptual bias, but not perceptual sensitivity. Sci Rep 2021; 11:17103. [PMID: 34429453] [PMCID: PMC8385100] [DOI: 10.1038/s41598-021-96346-z]
Abstract
The ability to distinguish self-generated stimuli from those caused by external sources is critical for all behaving organisms. Although many studies point to a sensory attenuation of self-generated stimuli, recent evidence suggests that motor actions can result in either attenuated or enhanced perceptual processing depending on the environmental context (i.e., stimulus intensity). The present study employed 2-AFC sound detection and loudness discrimination tasks to test whether sound source (self- or externally-generated) and stimulus intensity (supra- or near-threshold) interactively modulate detection ability and loudness perception. Self-generation did not affect detection or discrimination sensitivity (i.e., detection thresholds and Just Noticeable Difference, respectively). However, in the discrimination task, we observed a significant interaction between self-generation and intensity on perceptual bias (i.e., Point of Subjective Equality). Supra-threshold self-generated sounds were perceived as softer than externally-generated ones, while at near-threshold intensities self-generated sounds were perceived as louder than externally-generated ones. Our findings provide empirical support for recent theories on how predictions and signal intensity modulate perceptual processing, pointing to interactive effects of intensity and self-generation that seem to be driven by a biased estimate of perceived loudness rather than by changes in detection and discrimination sensitivity.
Affiliation(s)
- Nadia Paraskevoudi: Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035 Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Iria SanMiguel: Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035 Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Spain
Collapse
|
23
|
Turk AZ, Lotfi Marchoubeh M, Fritsch I, Maguire GA, SheikhBahaei S. Dopamine, vocalization, and astrocytes. BRAIN AND LANGUAGE 2021; 219:104970. [PMID: 34098250 PMCID: PMC8260450 DOI: 10.1016/j.bandl.2021.104970] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 05/21/2021] [Accepted: 05/23/2021] [Indexed: 05/06/2023]
Abstract
Dopamine, the main catecholamine neurotransmitter in the brain, is predominantly produced in the basal ganglia and released to various brain regions including the frontal cortex, midbrain and brainstem. Dopamine's effects are widespread and include modulation of a number of voluntary and innate behaviors. Vigilant regulation and modulation of dopamine levels throughout the brain is imperative for proper execution of motor behaviors, in particular speech and other types of vocalizations. While dopamine's role in motor circuitry is widely accepted, its unique function in normal and abnormal speech production is not fully understood. In this perspective, we first review the role of dopaminergic circuits in vocal production. We then discuss and propose the conceivable involvement of astrocytes, the numerous star-shaped glial cells of the brain, in the dopaminergic network modulating normal and abnormal vocal production.
Collapse
Affiliation(s)
- Ariana Z Turk
- Neuron-Glia Signaling and Circuits Unit, National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), Bethesda, 20892 MD, USA
| | - Mahsa Lotfi Marchoubeh
- Department of Chemistry and Biochemistry, University of Arkansas, Fayetteville, 72701 AR, USA
| | - Ingrid Fritsch
- Department of Chemistry and Biochemistry, University of Arkansas, Fayetteville, 72701 AR, USA
| | - Gerald A Maguire
- Department of Psychiatry and Neuroscience, School of Medicine, University of California, Riverside, 92521 CA, USA
| | - Shahriar SheikhBahaei
- Neuron-Glia Signaling and Circuits Unit, National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), Bethesda, 20892 MD, USA.
| |
Collapse
|
24
|
Walker JD, Pirschel F, Sundiang M, Niekrasz M, MacLean JN, Hatsopoulos NG. Chronic wireless neural population recordings with common marmosets. Cell Rep 2021; 36:109379. [PMID: 34260919 PMCID: PMC8513487 DOI: 10.1016/j.celrep.2021.109379] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 05/12/2021] [Accepted: 06/16/2021] [Indexed: 12/22/2022] Open
Abstract
Marmosets are an increasingly important model system for neuroscience, in part due to their genetic tractability and the enhanced cortical accessibility afforded by a lissencephalic neocortex. However, many of the techniques generally employed to record neural activity in primates inhibit the expression of natural behaviors in marmosets, precluding neurophysiological insights. To address this challenge, we have developed methods for recording neural population activity in unrestrained marmosets across multiple ethological behaviors, multiple brain states, and multiple years. Notably, our flexible methodological design allows for replacing electrode arrays and removal of implants, providing alternative experimental endpoints. We validate the method by recording sensorimotor cortical population activity in freely moving marmosets across their natural behavioral repertoire and during sleep.
Collapse
Affiliation(s)
- Jeffrey D Walker
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL 60615, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60615, USA.
| | - Friederice Pirschel
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60615, USA
| | - Marina Sundiang
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL 60615, USA
| | - Marek Niekrasz
- Department of Surgery, University of Chicago, Chicago, IL 60615, USA; The University of Chicago Neuroscience Institute, University of Chicago, Chicago, IL 60615, USA
| | - Jason N MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL 60615, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60615, USA; The University of Chicago Neuroscience Institute, University of Chicago, Chicago, IL 60615, USA
| | - Nicholas G Hatsopoulos
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL 60615, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60615, USA; The University of Chicago Neuroscience Institute, University of Chicago, Chicago, IL 60615, USA
| |
Collapse
|
25
|
Reznik D, Guttman N, Buaron B, Zion-Golumbic E, Mukamel R. Action-locked Neural Responses in Auditory Cortex to Self-generated Sounds. Cereb Cortex 2021; 31:5560-5569. [PMID: 34185837 DOI: 10.1093/cercor/bhab179] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/24/2021] [Accepted: 05/25/2021] [Indexed: 11/14/2022] Open
Abstract
Sensory perception is a product of interactions between the internal state of an organism and the physical attributes of a stimulus. It has been shown across the animal kingdom that perception and sensory-evoked physiological responses are modulated depending on whether or not the stimulus is the consequence of voluntary actions. These phenomena are often attributed to motor signals sent to relevant sensory regions that convey information about upcoming sensory consequences. However, the neurophysiological signature of action-locked modulations in sensory cortex, and their relationship with perception, is still unclear. In the current study, we recorded neurophysiological (using magnetoencephalography) and behavioral responses from 16 healthy subjects performing an auditory detection task of faint tones. Tones were either generated by subjects' voluntary button presses or occurred predictably following a visual cue. By introducing a constant temporal delay between button press/cue and tone delivery, and applying source-level analysis, we decoupled action-locked and auditory-locked activity in auditory cortex. We show action-locked evoked responses in auditory cortex following sound-triggering actions and preceding sound onset. Such evoked responses were not found for button presses that were not coupled with sounds, or for sounds delivered following a predictive visual cue. Our results provide evidence for efferent signals in human auditory cortex that are locked to voluntary actions coupled with future auditory consequences.
Collapse
Affiliation(s)
- Daniel Reznik
- Max Planck Institute for Human Cognitive and Brain Sciences, Psychology Department, Leipzig, 04103, Germany
| | - Noa Guttman
- The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
| | - Batel Buaron
- Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel
| | - Elana Zion-Golumbic
- The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
| | - Roy Mukamel
- Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel
| |
Collapse
|
26
|
Li Y, Wang X, Li Z, Chen J, Qin L. Effect of locomotion on the auditory steady state response of head-fixed mice. World J Biol Psychiatry 2021; 22:362-372. [PMID: 32901530 DOI: 10.1080/15622975.2020.1814409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
OBJECTIVES Electroencephalographic (EEG) examination of the auditory steady-state response (ASSR) can non-invasively probe the cortical capacity to generate gamma-band (40 Hz) oscillations, and it is increasingly applied in neurophysiological studies of rodent models of psychiatric disorders. Although it is well established that brain activity is significantly modulated by behavioural state (such as locomotion), how the ASSR is affected remains unclear. METHODS We investigated the effect of locomotion by recording local field potentials (LFPs) evoked by 40-Hz click trains from multiple brain areas: auditory cortex (AC), medial geniculate body (MGB), hippocampus (HP) and prefrontal cortex (PFC), in head-fixed mice free to run on a treadmill. Comparisons were conducted on the LFPs during spontaneous movement and stationary conditions. RESULTS We found that in both the auditory (AC and MGB) and non-auditory areas (HP and PFC), locomotion reduced the initial negative deflection of the LFP (early response during 0-100 ms from stimulus onset) and had no significant effect on ASSR phase-locking during the late stimulus period (100-500 ms). CONCLUSIONS Our results suggest that different neural mechanisms contribute to the early response and the ASSR, and that the ASSR is a more robust biomarker for investigating the pathogenesis of neuropsychiatric disorders.
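For reference, the 40-Hz ASSR is commonly quantified as the 40-Hz spectral component of the trial-averaged response once the steady state is established. The following is a minimal sketch under stated assumptions: the epochs are synthetic placeholder arrays (not data from the study), the sampling rate and the 100-500 ms analysis window are illustrative, and a plain FFT of the averaged LFP stands in for whatever spectral estimator the authors used.

```python
import numpy as np

# Hypothetical LFP epochs locked to 40-Hz click-train onset: (n_trials, n_times).
fs = 1000                                 # Hz (assumed)
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, (80, 600))  # 0-600 ms post-onset, placeholder data

erp = epochs.mean(axis=0)                 # trial-averaged response

# ASSR magnitude: 40-Hz Fourier component of the averaged response in the
# 100-500 ms window, where the steady-state response is established.
window = erp[100:500]                     # samples 100-499 = 100-500 ms at 1 kHz
freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
spectrum = np.abs(np.fft.rfft(window)) / window.size
assr_40hz = spectrum[np.argmin(np.abs(freqs - 40.0))]

print(f"40-Hz ASSR amplitude: {assr_40hz:.3f} (arbitrary units)")
```

Phase-locking measures such as intertrial coherence (sketched later in this list for the theta band) are computed analogously, but from single-trial phases at 40 Hz rather than from the averaged waveform.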
Collapse
Affiliation(s)
- Yingzhuo Li
- Department of Physiology, China Medical University, Shenyang, PR China
| | - Xuejiao Wang
- Department of Physiology, China Medical University, Shenyang, PR China
| | - Zijie Li
- Department of Physiology, China Medical University, Shenyang, PR China
| | - Jingyu Chen
- Department of Physiology, China Medical University, Shenyang, PR China
| | - Ling Qin
- Department of Physiology, China Medical University, Shenyang, PR China
| |
Collapse
|
27
|
Yavorska I, Wehr M. Effects of Locomotion in Auditory Cortex Are Not Mediated by the VIP Network. Front Neural Circuits 2021; 15:618881. [PMID: 33897378 PMCID: PMC8058405 DOI: 10.3389/fncir.2021.618881] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 03/09/2021] [Indexed: 02/03/2023] Open
Abstract
Movement has a prominent impact on activity in sensory cortex, but has opposing effects on visual and auditory cortex. Both cortical areas feature a vasoactive intestinal peptide-expressing (VIP) disinhibitory circuit, which in visual cortex contributes to the effect of running. In auditory cortex, however, the role of VIP circuitry in running effects remains poorly understood. Running and optogenetic VIP activation are known to differentially modulate sound-evoked activity in auditory cortex, but it is unknown how these effects vary across cortical layers, and whether laminar differences in the roles of VIP circuitry could contribute to the substantial diversity that has been observed in the effects of both movement and VIP activation. Here we asked whether VIP neurons contribute to the effects of running, across the layers of auditory cortex. We found that both running and optogenetic activation of VIP neurons produced diverse changes in the firing rates of auditory cortical neurons, but with distinct effects on spontaneous and evoked activity and with different patterns across cortical layers. On average, running increased spontaneous firing rates but decreased evoked firing rates, resulting in a reduction of the neuronal encoding of sound. This reduction in sound encoding was observed in all cortical layers, but was most pronounced in layer 2/3. In contrast, VIP activation increased both spontaneous and evoked firing rates, and had no net population-wide effect on sound encoding, but strongly suppressed sound encoding in layer 4 narrow-spiking neurons. These results suggest that VIP activation and running act independently, which we then tested by comparing the arithmetic sum of the two effects measured separately to the actual combined effect of running and VIP activation, which were closely matched. We conclude that the effects of locomotion in auditory cortex are not mediated by the VIP network.
Collapse
Affiliation(s)
- Iryna Yavorska
- Department of Psychology, Institute of Neuroscience, University of Oregon, Eugene, OR, United States
| | - Michael Wehr
- Department of Psychology, Institute of Neuroscience, University of Oregon, Eugene, OR, United States
| |
Collapse
|
28
|
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. [PMID: 33788616 DOI: 10.1152/jn.00406.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex and how this perceptual information then informs the relevant behavioral decisions is still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, where they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions. We found that choice-related activity in both A1 and ML neurons shifts between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process. NEW & NOTEWORTHY: We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
Collapse
Affiliation(s)
- Jennifer L Mohn
- Center for Neuroscience, University of California, Davis, California.,Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Joshua D Downer
- Center for Neuroscience, University of California, Davis, California.,Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, California.,Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Jeffrey S Johnson
- Center for Neuroscience, University of California, Davis, California.,Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, California.,Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|
29
|
Altmann CF, Yamasaki D, Song Y, Bucher B. Processing of self-initiated sound motion in the human brain. Brain Res 2021; 1762:147433. [PMID: 33737062 DOI: 10.1016/j.brainres.2021.147433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 03/10/2021] [Accepted: 03/11/2021] [Indexed: 12/01/2022]
Abstract
Interacting with objects in our environment usually leads to audible noise. Brain responses to such self-initiated sounds have been shown to be attenuated, in particular the so-called N1 component measured with electroencephalography (EEG). This attenuation has been proposed to be the effect of an internal forward model that allows for cancellation of the sensory consequences of a motor command. In the current study we asked whether the attenuation due to self-initiation of a sound also affects a later event-related potential, the so-called motion-onset response, that arises in response to moving sounds. To this end, volunteers were instructed to move their index fingers either leftward or rightward, which resulted in virtual movement of a sound either to the left or to the right. In Experiment 1, sound motion was induced with in-ear headphones by shifting interaural time and intensity differences and thus shifting the intracranial sound image. We compared the motion-onset responses under two conditions: a) congruent, and b) incongruent. In the congruent condition, the sound image moved in the direction of the finger movement, while in the incongruent condition sound motion was in the opposite direction of the finger movement. Clear motion-onset responses with a negative cN1 component peaking at about 160 ms and a positive cP2 component peaking at about 230 ms after motion onset were obtained for both the congruent and incongruent conditions. However, the motion-onset responses did not significantly differ between congruent and incongruent conditions in amplitude or latency. In Experiment 2, in which sounds were presented with loudspeakers, we observed attenuation for self-induced versus externally triggered sound motion onset, but again, there was no difference between congruent and incongruent conditions. In sum, these two experiments suggest that the motion-onset response measured by EEG can be attenuated for self-generated sounds. However, our results did not indicate that this attenuation depended on the congruency of action and sound motion direction.
Collapse
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Parkinson-Klinik Ortenau, 77709 Wolfach, Germany.
| | - Daiki Yamasaki
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan; Japan Society for the Promotion of Science, Tokyo 102-0083, Japan
| | - Yunqing Song
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan
| | - Benoit Bucher
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan
| |
Collapse
|
30
|
Ford JM, Roach BJ, Mathalon DH. Vocalizing and singing reveal complex patterns of corollary discharge function in schizophrenia. Int J Psychophysiol 2021; 164:30-40. [PMID: 33621618 DOI: 10.1016/j.ijpsycho.2021.02.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 01/30/2021] [Accepted: 02/16/2021] [Indexed: 10/22/2022]
Abstract
INTRODUCTION As we vocalize, our brains generate predictions of the sounds we produce to enable suppression of neural responses when intentions match vocalizations and to make adjustments when they do not. This may be instantiated by efference copy and corollary discharge mechanisms, which are impaired in people with schizophrenia (SZ). Although innate, these mechanisms can be affected by intentions. We asked if attending to pitch during vocalizations would take these mechanisms "off-line" and reduce suppression. METHODS Event-related potentials (ERP) were recorded from 96 SZ and 92 healthy controls (HC) as they vocalized triplets in monotone (Phrase) or sang triplets in ascending thirds (Pitch). Pre-vocalization activity (Bereitschaftspotential, BP), N1, and P2 ERP components to sounds were compared during vocalization and playback. RESULTS N1 was not as suppressed during Pitch as during Phrase. N1 suppression was not affected by SZ in either task when all data were collapsed across pitches (Pitch) and positions (Phrase). However, when binned according to vocalization performance, SZ showed less N1 suppression than HC at longer (>2 s) inter-stimulus intervals (Phrase) and inconsistent suppression across pitches (Pitch). Unlike N1, P2 was more suppressed during Pitch than Phrase and not affected by SZ. BP was greater during vocalization than playback but did not contribute to N1 or P2 effects. Pitch variability was inversely related to negative symptoms. CONCLUSIONS Neural processing is not suppressed when patients and controls sing, and corollary discharge abnormalities in schizophrenia are only seen at long vocalization intervals.
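Several entries in this list, including this one, quantify corollary discharge function as N1 suppression: the difference in auditory N1 amplitude between self-produced vocalization (Talk) and its playback (Listen). A minimal sketch follows, assuming hypothetical single-channel epoched EEG (random placeholder arrays, not study data); the 80-120 ms N1 window, the baseline period, and the sign convention are illustrative and vary across papers.

```python
import numpy as np

fs = 1000                                                 # Hz (assumed)
times = np.arange(-100, 400) / 1000.0                     # epoch from -100 to +400 ms
rng = np.random.default_rng(0)
talk_epochs   = rng.normal(0.0, 5.0, (60, times.size))    # placeholder trials x samples
listen_epochs = rng.normal(0.0, 5.0, (60, times.size))

def n1_amplitude(epochs, times, window=(0.080, 0.120), baseline=(-0.100, 0.0)):
    """Mean amplitude of the baseline-corrected, trial-averaged ERP in the N1 window."""
    erp = epochs.mean(axis=0)
    erp = erp - erp[(times >= baseline[0]) & (times < baseline[1])].mean()
    return erp[(times >= window[0]) & (times <= window[1])].mean()

n1_talk = n1_amplitude(talk_epochs, times)
n1_listen = n1_amplitude(listen_epochs, times)

# N1 is negative-going; a positive Talk-minus-Listen difference therefore indicates
# an attenuated (suppressed) N1 during self-produced vocalization.
n1_suppression = n1_talk - n1_listen
print(f"Talk N1 = {n1_talk:.2f} uV, Listen N1 = {n1_listen:.2f} uV, "
      f"suppression = {n1_suppression:.2f} uV")
```

In practice this is computed per participant at a fronto-central site and then compared across groups or conditions; the sketch only illustrates the arithmetic.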
Collapse
Affiliation(s)
- Judith M Ford
- University of California, San Francisco (UCSF), United States of America; Veterans Affairs San Francisco Healthcare System, United States of America.
| | - Brian J Roach
- Veterans Affairs San Francisco Healthcare System, United States of America
| | - Daniel H Mathalon
- University of California, San Francisco (UCSF), United States of America; Veterans Affairs San Francisco Healthcare System, United States of America
| |
Collapse
|
31
|
Endo N, Ito T, Mochida T, Ijiri T, Watanabe K, Nakazawa K. Precise force controls enhance loudness discrimination of self-generated sound. Exp Brain Res 2021; 239:1141-1149. [PMID: 33555383 DOI: 10.1007/s00221-020-05993-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Accepted: 11/19/2020] [Indexed: 10/22/2022]
Abstract
Motor executions alter sensory processes. Studies have shown that loudness perception changes when a sound is generated by active movement. However, it is still unknown whether and how motor-related changes in loudness perception depend on the task demands of motor execution. We examined whether different levels of precision demand in motor control affect loudness perception. We carried out a loudness discrimination test in which the sound stimulus was produced in conjunction with a force generation task. We tested three target force amplitude levels. The force target was presented on a monitor as a fixed visual target. The generated force was also presented on the same monitor as a movement of a visual cursor. Participants adjusted their force amplitude within a predetermined range, without overshooting, using the visual target and the moving cursor. In the control condition, the sound and visual stimuli were generated externally (without a force generation task). We found that discrimination performance was significantly improved when the sound was produced by the force generation task compared to the control condition, in which the sound was produced externally, although we did not find that this improvement in discrimination performance changed depending on the target force amplitude level. The results suggest that the demand for precise control to produce a fixed amount of force may be key to obtaining the facilitatory effect of motor execution on auditory processes.
Collapse
Affiliation(s)
- Nozomi Endo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan.,Faculty of Science and Engineering, Waseda University, 3-4-1, Ohkubo, Shinjuku-ku, Tokyo, 169-8555, Japan.,Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan
| | - Takayuki Ito
- Univ. Grenoble Alps, Grenoble-INP, CNRS, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus BP46, 38402, Saint Martin D'heres Cedex, France.,Haskins Laboratories, 300 George Street, New Haven, CT, 06511, USA
| | - Takemi Mochida
- NTT Communication Science Laboratories, 3-1, Morinosato Wakamiya, Atsugi-shi, Kanagawa, 243-0198, Japan
| | - Tetsuya Ijiri
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
| | - Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, 3-4-1, Ohkubo, Shinjuku-ku, Tokyo, 169-8555, Japan.,Art & Design, University of New South Wales, Oxford St & Greens Rd, Paddington, NSW 202, Australia
| | - Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan.
| |
Collapse
|
32
|
Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021; 14:615259. [PMID: 33551756 PMCID: PMC7862336 DOI: 10.3389/fncir.2020.615259] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 12/17/2020] [Indexed: 01/28/2023] Open
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
Collapse
Affiliation(s)
- Alexander Asilador
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
| | - Daniel A. Llano
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
| |
Collapse
|
33
|
Shamma S, Patel P, Mukherjee S, Marion G, Khalighinejad B, Han C, Herrero J, Bickel S, Mehta A, Mesgarani N. Learning Speech Production and Perception through Sensorimotor Interactions. Cereb Cortex Commun 2020; 2:tgaa091. [PMID: 33506209 PMCID: PMC7811190 DOI: 10.1093/texcom/tgaa091] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 11/19/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022] Open
Abstract
Action and perception are closely linked in many behaviors necessitating a close coordination between sensory and motor neural processes so as to achieve a well-integrated smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
Collapse
Affiliation(s)
- Shihab Shamma
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.,Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA.,Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Shoutik Mukherjee
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
| | - Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA.,Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA.,Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Jose Herrero
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Stephan Bickel
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Ashesh Mehta
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA.,The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA.,Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| |
Collapse
|
34
|
Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Banks MI, Krause BM, Kawasaki H, Howard MA. Electrophysiology of the Human Superior Temporal Sulcus during Speech Processing. Cereb Cortex 2020; 31:1131-1148. [PMID: 33063098 DOI: 10.1093/cercor/bhaa281] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Revised: 08/06/2020] [Accepted: 09/01/2020] [Indexed: 12/20/2022] Open
Abstract
The superior temporal sulcus (STS) is a crucial hub for speech perception and can be studied with high spatiotemporal resolution using electrodes targeting mesial temporal structures in epilepsy patients. Goals of the current study were to clarify functional distinctions between the upper (STSU) and the lower (STSL) bank, hemispheric asymmetries, and activity during self-initiated speech. Electrophysiologic properties were characterized using semantic categorization and dialog-based tasks. Gamma-band activity and alpha-band suppression were used as complementary measures of STS activation. Gamma responses to auditory stimuli were weaker in STSL compared with STSU and had longer onset latencies. Activity in anterior STS was larger during speaking than listening; the opposite pattern was observed more posteriorly. Opposite hemispheric asymmetries were found for alpha suppression in STSU and STSL. Alpha suppression in the STS emerged earlier than in core auditory cortex, suggesting feedback signaling within the auditory cortical hierarchy. STSL was the only region where gamma responses to words presented in the semantic categorization tasks were larger in subjects with superior task performance. More pronounced alpha suppression was associated with better task performance in Heschl's gyrus, superior temporal gyrus, and STS. Functional differences between STSU and STSL warrant their separate assessment in future studies.
Collapse
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA.,Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
| | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
| | | | - Matthew I Banks
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA.,Department of Neuroscience, University of Wisconsin-Madison, Madison, WI 53705, USA
| | - Bryan M Krause
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA.,Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA.,Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, USA
| |
Collapse
|
35
|
O'Connell MN, Barczak A, McGinnis T, Mackin K, Mowery T, Schroeder CE, Lakatos P. The Role of Motor and Environmental Visual Rhythms in Structuring Auditory Cortical Excitability. iScience 2020; 23:101374. [PMID: 32738615 PMCID: PMC7394914 DOI: 10.1016/j.isci.2020.101374] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 06/14/2020] [Accepted: 07/13/2020] [Indexed: 10/26/2022] Open
Abstract
Previous studies indicate that motor sampling patterns modulate neuronal excitability in sensory brain regions by entraining brain rhythms, a process termed motor-initiated entrainment. In addition, rhythms of the external environment are also capable of entraining brain rhythms. Our first goal was to investigate the properties of motor-initiated entrainment in the auditory system using a prominent visual motor sampling pattern in primates, saccades. Second, we wanted to determine whether/how motor-initiated entrainment interacts with visual environmental entrainment. We examined laminar profiles of neuronal ensemble activity in primary auditory cortex and found that whereas motor-initiated entrainment has a suppressive effect, visual environmental entrainment has an enhancive effect. We also found that these processes are temporally coupled, and their temporal relationship ensures that their effect on excitability is complementary rather than interfering. Altogether, our results demonstrate that motor and sensory systems continuously interact in orchestrating the brain's context for the optimal sampling of our multisensory environment.
Collapse
Affiliation(s)
- Monica N O'Connell
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA.
| | - Annamaria Barczak
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
| | - Tammy McGinnis
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
| | - Kieran Mackin
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
| | - Todd Mowery
- Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
| | - Charles E Schroeder
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Departments of Neurological Surgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA
| | - Peter Lakatos
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Department of Psychiatry, New York University School of Medicine, New York, NY 10016, USA.
| |
Collapse
|
36
|
Roach BJ, Ford JM, Loewy RL, Stuart BK, Mathalon DH. Theta Phase Synchrony Is Sensitive to Corollary Discharge Abnormalities in Early Illness Schizophrenia but Not in the Psychosis Risk Syndrome. Schizophr Bull 2020; 47:415-423. [PMID: 32793958 PMCID: PMC7965080 DOI: 10.1093/schbul/sbaa110] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
BACKGROUND Prior studies have shown that the auditory N1 event-related potential component elicited by self-generated vocalizations is reduced relative to played back vocalizations, putatively reflecting a corollary discharge mechanism. Schizophrenia patients and psychosis risk syndrome (PRS) youth show deficient N1 suppression during vocalization, consistent with corollary discharge dysfunction. Because N1 is an admixture of theta (4-7 Hz) power and phase synchrony, we examined their contributions to N1 suppression during vocalization, as well as their sensitivity, relative to N1, to corollary discharge dysfunction in schizophrenia and PRS individuals. METHODS Theta phase and power values were extracted from electroencephalography data acquired from PRS youth (n = 71), early illness schizophrenia patients (ESZ; n = 84), and healthy controls (HCs; n = 103) as they said "ah" (Talk) and then listened to the playback of their vocalizations (Listen). A principal component analysis extracted theta intertrial coherence (ITC; phase consistency) and event-related spectral power, peaking in the N1 latency range. Talk-Listen suppression scores were analyzed. RESULTS Talk-Listen suppression was greater for theta ITC (Cohen's d = 1.46) than for N1 in HC (d = 0.63). Both were deficient in ESZ, but only N1 suppression was deficient in PRS. When deprived of variance shared with theta ITC suppression, N1 suppression no longer differentiated ESZ and PRS individuals from HC. Deficits in theta ITC suppression were correlated with delusions (P = .007) in ESZ. Theta power suppression did not differentiate groups. CONCLUSIONS Theta ITC-suppression during vocalization is a more sensitive index of corollary discharge-mediated auditory cortical suppression than N1 suppression and is more sensitive to corollary discharge dysfunction in ESZ than in PRS individuals.
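The theta intertrial coherence (ITC) measure central to this entry quantifies how consistently theta phase aligns across trials at each time point; a value of 1 means identical phase on every trial. The study derived ITC from a time-frequency decomposition followed by principal component analysis; the sketch below substitutes a simpler bandpass-plus-Hilbert estimate on synthetic placeholder epochs, so treat it as an illustration of the ITC idea rather than the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itc(epochs, fs, band=(4.0, 7.0)):
    """Intertrial coherence (phase consistency across trials) in the theta band.

    epochs: array of shape (n_trials, n_times). Returns an ITC time course in [0, 1].
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, epochs, axis=1), axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

fs = 1000                                     # Hz (assumed)
rng = np.random.default_rng(0)
talk   = rng.normal(0.0, 1.0, (80, 600))      # placeholder epochs, -100 to +500 ms
listen = rng.normal(0.0, 1.0, (80, 600))

itc_talk, itc_listen = theta_itc(talk, fs), theta_itc(listen, fs)

# Talk-Listen suppression score: reduced phase consistency around the N1 latency
# during vocalization relative to playback (window indices are illustrative).
n1_window = slice(180, 220)                   # ~80-120 ms post-onset if onset at sample 100
suppression = itc_listen[n1_window].mean() - itc_talk[n1_window].mean()
print(f"Theta ITC suppression (Listen - Talk): {suppression:.3f}")
```

The same machinery yields event-related spectral power if the Hilbert amplitude, rather than the phase, is averaged across trials, which is how the power and phase contributions to N1 can be examined separately.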
Collapse
Affiliation(s)
- Brian J Roach
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA
| | - Judith M Ford
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA,Department of Psychiatry, University of California, San Francisco, CA,To whom correspondence should be addressed; tel: 415 221-4810 x24187, fax: 415-750-6622, e-mail:
| | - Rachel L Loewy
- Department of Psychiatry, University of California, San Francisco, CA
| | - Barbara K Stuart
- Department of Psychiatry, University of California, San Francisco, CA
| | - Daniel H Mathalon
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA,Department of Psychiatry, University of California, San Francisco, CA
| |
Collapse
|
37
|
Abstract
Rhythms are a fundamental and defining feature of neuronal activity in animals including humans. This rhythmic brain activity interacts in complex ways with rhythms in the internal and external environment through the phenomenon of 'neuronal entrainment', which is attracting increasing attention due to its suggested role in a multitude of sensory and cognitive processes. Some senses, such as touch and vision, sample the environment rhythmically, while others, like audition, are faced with mostly rhythmic inputs. Entrainment couples rhythmic brain activity to external and internal rhythmic events, serving fine-grained routing and modulation of external and internal signals across multiple spatial and temporal hierarchies. This interaction between a brain and its environment can be experimentally investigated and even modified by rhythmic sensory stimuli or invasive and non-invasive neuromodulation techniques. We provide a comprehensive overview of the topic and propose a theoretical framework of how neuronal entrainment dynamically structures information from incoming neuronal, bodily and environmental sources. We discuss the different types of neuronal entrainment, the conceptual advances in the field, and converging evidence for general principles.
Collapse
Affiliation(s)
- Peter Lakatos
- Translational Neuroscience Laboratories, Nathan Kline Institute, Old Orangeburg Road 140, Orangeburg, New York 10962, USA; Department of Psychiatry, New York University School of Medicine, One, 8, Park Ave, New York, NY 10016, USA.
| | - Joachim Gross
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Malmedyweg 15, 48149 Muenster, Germany; Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, UK.
| | - Gregor Thut
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, UK.
| |
Collapse
|
38
|
Ford JM, Mathalon DH. Efference Copy, Corollary Discharge, Predictive Coding, and Psychosis. BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2020; 4:764-767. [PMID: 31495399 DOI: 10.1016/j.bpsc.2019.07.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 07/16/2019] [Indexed: 01/05/2023]
Affiliation(s)
- Judith M Ford
- Veterans Affairs San Francisco Healthcare System and the University of California, San Francisco, San Francisco, California.
| | - Daniel H Mathalon
- Veterans Affairs San Francisco Healthcare System and the University of California, San Francisco, San Francisco, California
| |
Collapse
|
39
|
Li S, Zhu H, Tian X. Corollary Discharge Versus Efference Copy: Distinct Neural Signals in Speech Preparation Differentially Modulate Auditory Responses. Cereb Cortex 2020; 30:5806-5820. [PMID: 32542347 DOI: 10.1093/cercor/bhaa154] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2020] [Revised: 05/15/2020] [Accepted: 05/16/2020] [Indexed: 11/14/2022] Open
Abstract
Actions influence sensory processing in a complex way to shape behavior. For example, during actions, a copy of motor signals, termed "corollary discharge" (CD) or "efference copy" (EC), can be transmitted to sensory regions and modulate perception. However, the sole inhibitory function of the motor copies is challenged by mixed empirical observations as well as multifaceted computational demands for behaviors. We hypothesized that the content in the motor signals available at distinct stages of actions determined the nature of signals (CD vs. EC) and constrained their modulatory functions on perceptual processing. We tested this hypothesis using speech, in which we could precisely control and quantify the course of action. In three electroencephalography (EEG) experiments using a novel delayed articulation paradigm, we found that preparation without linguistic content suppressed auditory responses to all speech sounds, whereas preparing to speak a syllable selectively enhanced the auditory responses to the prepared syllable. A computational model demonstrated that a bifurcation of motor signals could be a potential algorithm and neural implementation to achieve these distinct functions in the motor-to-sensory transformation. These results suggest that distinct motor signals are generated in the motor-to-sensory transformation and integrated with sensory input to modulate perception.
Collapse
Affiliation(s)
- Siqi Li
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China.,NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
| | - Hao Zhu
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China.,Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China
| | - Xing Tian
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China.,Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China
| |
Collapse
|
40
|
Knyazeva S, Selezneva E, Gorkin A, Ohl FW, Brosch M. Representation of Auditory Task Components and of Their Relationships in Primate Auditory Cortex. Front Neurosci 2020; 14:306. [PMID: 32372903 PMCID: PMC7186436 DOI: 10.3389/fnins.2020.00306] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 03/16/2020] [Indexed: 11/13/2022] Open
Abstract
The current study aimed to resolve some of the inconsistencies in the literature on which mental processes affect auditory cortical activity. To this end, we studied auditory cortical firing in four monkeys with different experience while they were involved in six conditions with different arrangements of the task components sound, motor action, and water reward. Firing rates changed most strongly when a sound-only condition was compared to a condition in which sound was paired with water. Additional smaller changes occurred in more complex conditions in which the monkeys received water for motor actions before or after sounds. Our findings suggest that auditory cortex is most strongly modulated by the subjects’ level of arousal, thus by a psychological concept related to motor activity triggered by reinforcers and to readiness for operant behavior. Our findings also suggest that auditory cortex is involved in associative and emotional functions, but not in agency and cognitive effort.
Collapse
Affiliation(s)
| | | | - Alexander Gorkin
- Institute of Psychology, Russian Academy of Sciences, Moscow, Russia
| | - Frank W Ohl
- Leibniz Institut für Neurobiologie, Magdeburg, Germany.,Institute of Biology, Otto-von-Guericke University, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany
| | - Michael Brosch
- Leibniz Institut für Neurobiologie, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany
| |
Collapse
|
41
|
Dissociation of Unit Activity and Gamma Oscillations during Vocalization in Primate Auditory Cortex. J Neurosci 2020; 40:4158-4171. [PMID: 32295815 DOI: 10.1523/jneurosci.2749-19.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Revised: 02/10/2020] [Accepted: 02/26/2020] [Indexed: 11/21/2022] Open
Abstract
Vocal production is a sensory-motor process in which auditory self-monitoring is used to ensure accurate communication. During vocal production, the auditory cortex of both humans and animals is suppressed, a phenomenon that plays an important role in self-monitoring and vocal motor control. However, the underlying neural mechanisms of this vocalization-induced suppression are unknown. γ-band oscillations (>25 Hz) have been implicated in a variety of cortical functions and are thought to arise from the activity of local inhibitory interneurons, but they have not been studied during vocal production. We therefore examined γ-band activity in the auditory cortex of vocalizing marmoset monkeys, of either sex, and found that γ responses increased during vocal production. This increase in γ contrasts with simultaneously recorded suppression of single-unit and multiunit responses. Recorded vocal γ oscillations exhibited two separable components: a vocalization-specific nonsynchronized ("induced") response correlating with vocal suppression, and a synchronized ("evoked") response that was also present during passive sound playback. These results provide evidence for the role of cortical γ oscillations during inhibitory processing. Furthermore, the two distinct components of the γ response suggest possible mechanisms for vocalization-induced suppression, and may correspond to the sensory-motor integration of top-down and bottom-up inputs to the auditory cortex during vocal production. SIGNIFICANCE STATEMENT: Vocal communication is important to both humans and animals. In order to ensure accurate information transmission, we must monitor our own vocal output. Surprisingly, spiking activity in the auditory cortex is suppressed during vocal production yet maintains sensitivity to the sound of our own voice ("feedback"). The mechanisms of this vocalization-induced suppression are unknown. Here we show that auditory cortical γ oscillations, which reflect interneuron activity, are actually increased during vocal production, the opposite of the response seen in spiking units. We discuss these results in relation to proposed functions of γ activity during inhibitory sensory processing and coordination of different brain regions, suggesting a role in sensory-motor integration.
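The separation above between a vocalization-specific "induced" (non-phase-locked) gamma response and an "evoked" (phase-locked) response can be illustrated with a common ERP-subtraction approach. This is a minimal sketch on synthetic placeholder epochs, not the authors' analysis: the 30-60 Hz band, sampling rate, and epoch length are assumptions, and a bandpass-plus-Hilbert envelope stands in for a full time-frequency decomposition.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power(x, fs, band=(30.0, 60.0)):
    """Gamma-band power time course via bandpass filtering and the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

fs = 1000                                   # Hz (assumed)
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, (100, 1000))  # placeholder trials x samples, vocal-onset locked

erp = epochs.mean(axis=0)

# "Evoked" (phase-locked) gamma: power of the trial-averaged response.
evoked_gamma = gamma_power(erp, fs)

# "Induced" (non-phase-locked) gamma: average single-trial power after removing the ERP.
induced_gamma = gamma_power(epochs - erp, fs).mean(axis=0)

print(f"mean evoked gamma = {evoked_gamma.mean():.3f}, "
      f"mean induced gamma = {induced_gamma.mean():.3f}")
```

With a decomposition of this kind, suppression of spiking alongside an increase in the induced component (as reported here) can be assessed on the same epochs used for unit analysis.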
Collapse
|
42
|
Walker JD, Pirschel F, Gidmark N, MacLean JN, Hatsopoulos NG. A platform for semiautomated voluntary training of common marmosets for behavioral neuroscience. J Neurophysiol 2020; 123:1420-1426. [PMID: 32130092 PMCID: PMC7191516 DOI: 10.1152/jn.00300.2019] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 02/28/2020] [Accepted: 02/28/2020] [Indexed: 01/31/2023] Open
Abstract
Generally, behavioral neuroscience studies of the common marmoset employ adaptations of well-established training methods used with macaque monkeys. However, in many cases these approaches do not readily generalize to marmosets, indicating a need for alternatives. Here we present the development of one such alternative: a platform for semiautomated, voluntary, in-home-cage behavioral training that allows for the study of naturalistic behaviors. We describe the design and production of a modular behavioral training apparatus using CAD software and digital fabrication. We demonstrate that this apparatus permits voluntary behavioral training and data collection throughout the marmoset's waking hours with little experimenter intervention. Furthermore, we demonstrate the use of this apparatus to reconstruct the kinematics of the marmoset's upper limb movement during natural foraging behavior. NEW & NOTEWORTHY: The study of marmosets in neuroscience has grown rapidly and presents unique challenges. We address those challenges with an innovative platform for semiautomated, voluntary training that allows marmosets to train throughout their waking hours with minimal experimenter intervention. We describe the use of this platform to capture upper limb kinematics during foraging and to expand the opportunities for behavioral training beyond the limits of traditional training sessions. This flexible platform can easily incorporate other tasks.
Collapse
Affiliation(s)
- Jeffrey D Walker
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
| | - Friederice Pirschel
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
| | | | - Jason N MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Department of Neurobiology, University of Chicago, Chicago, Illinois
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois
| | - Nicholas G Hatsopoulos
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois
| |
Collapse
|
43
|
Sares AG, Deroche MLD, Ohashi H, Shiller DM, Gracco VL. Neural Correlates of Vocal Pitch Compensation in Individuals Who Stutter. Front Hum Neurosci 2020; 14:18. [PMID: 32161525 PMCID: PMC7053555 DOI: 10.3389/fnhum.2020.00018] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Accepted: 01/17/2020] [Indexed: 02/06/2023] Open
Abstract
Stuttering is a disorder that impacts the smooth flow of speech production and is associated with a deficit in sensorimotor integration. In a previous experiment, individuals who stutter were able to vocally compensate for pitch shifts in their auditory feedback, but they exhibited more variability in the timing of their corrective responses. In the current study, we focused on the neural correlates of the task using functional MRI. Participants produced a vowel sound in the scanner while hearing their own voice in real time through headphones. On some trials, the audio was shifted up or down in pitch, eliciting a corrective vocal response. Contrasting pitch-shifted vs. unshifted trials revealed bilateral superior temporal activation over all the participants. However, the groups differed in the activation of middle temporal gyrus and superior frontal gyrus [Brodmann area 10 (BA 10)], with individuals who stutter displaying deactivation while controls displayed activation. In addition to the standard univariate general linear modeling approach, we employed a data-driven technique (independent component analysis, or ICA) to separate task activity into functional networks. Among the networks most correlated with the experimental time course, there was a combined auditory-motor network in controls, but the two networks remained separable for individuals who stuttered. The decoupling of these networks may account for temporal variability in pitch compensation reported in our previous work, and supports the idea that neural network coherence is disturbed in the stuttering brain.
Collapse
Affiliation(s)
- Anastasia G Sares
- Speech Motor Control Lab, Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada; Centre for Research on Brain, Language, and Music, Montreal, QC, Canada
- Mickael L D Deroche
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada; Laboratory for Hearing and Cognition, Department of Psychology, Concordia University, Montreal, QC, Canada
- Douglas M Shiller
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada; École d'orthophonie et d'audiologie, Université de Montréal, Montreal, QC, Canada
- Vincent L Gracco
- Speech Motor Control Lab, Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada; Centre for Research on Brain, Language, and Music, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, United States
44
Myers JC, Mock JR, Golob EJ. Sensorimotor Integration Can Enhance Auditory Perception. Sci Rep 2020; 10:1496. [PMID: 32001755 PMCID: PMC6992622 DOI: 10.1038/s41598-020-58447-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 01/08/2020] [Indexed: 11/26/2022] Open
Abstract
Whenever we move, speak, or play musical instruments, our actions generate auditory sensory input. The sensory consequences of our actions are thought to be predicted via sensorimotor integration, which involves anatomical and functional links between auditory and motor brain regions. The physiological connections are relatively well established, but less is known about how sensorimotor integration affects auditory perception. The sensory attenuation hypothesis suggests that the perceived loudness of self-generated sounds is attenuated to help distinguish self-generated sounds from ambient sounds. Sensory attenuation would be useful when ambient sounds are louder, but it could lead to less accurate perception when ambient sounds are quieter. We hypothesize that a key function of sensorimotor integration is the facilitated processing of self-generated sounds, leading to more accurate perception under most conditions. The sensory attenuation hypothesis predicts better performance for higher- but not lower-intensity comparisons, whereas sensory facilitation predicts improved perception regardless of comparison sound intensity. A series of experiments tested these hypotheses, and the results supported the facilitation (enhancement) account: overall, people were more accurate at comparing the loudness of two sounds when they made one of the sounds themselves. We propose that the brain selectively modulates the perception of self-generated sounds to enhance representations of action consequences.
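The contrast between the two hypotheses can be illustrated with a toy tabulation. The sketch below uses simulated correct/incorrect trials (all accuracy values and trial counts are invented) to show how loudness-comparison accuracy for self-generated versus externally generated sounds would be compared at quieter and louder comparison levels.

```python
# Hedged sketch: tabulating loudness-discrimination accuracy by condition.
# Responses are simulated; the real study's design and data differ.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# (sound source, comparison level, assumed true probability correct)
conditions = [("self", "louder comparison", 0.85),
              ("self", "quieter comparison", 0.83),
              ("external", "louder comparison", 0.78),
              ("external", "quieter comparison", 0.77)]

print(f"{'source':<10}{'comparison':<22}{'accuracy':>10}")
for source, comparison, p_correct in conditions:
    correct = rng.random(n_trials) < p_correct   # simulated correct/incorrect trials
    print(f"{source:<10}{comparison:<22}{correct.mean():>10.3f}")

# Facilitation predicts self > external at both comparison levels;
# attenuation predicts self > external only for the louder comparison.
```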
Affiliation(s)
- John C Myers
- Department of Psychology, University of Texas, San Antonio, USA
- Jeffrey R Mock
- Department of Psychology, University of Texas, San Antonio, USA
- Edward J Golob
- Department of Psychology, University of Texas, San Antonio, USA
45
Schmitt LM, Wang J, Pedapati EV, Thurman AJ, Abbeduto L, Erickson CA, Sweeney JA. A neurophysiological model of speech production deficits in fragile X syndrome. Brain Commun 2019; 2. [PMID: 32924010 PMCID: PMC7425415 DOI: 10.1093/braincomms/fcz042] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
Fragile X syndrome is the most common inherited intellectual disability and monogenic cause of autism spectrum disorder. Expressive language deficits, especially in speech production, are nearly ubiquitous among individuals with fragile X, but understanding of the neurological bases for these deficits remains limited. Speech production depends on feedforward control and the synchronization of neural oscillations between speech-related areas of frontal cortex and auditory areas of temporal cortex. Interaction in this circuitry allows the corollary discharge of intended speech generated from an efference copy of speech commands to be compared against actual speech sounds, which is critical for making adaptive adjustments to optimize future speech. We aimed to determine whether alterations in coherence between frontal and temporal cortices prior to speech production are present in individuals with fragile X and whether they relate to expressive language dysfunction. Twenty-one participants with full-mutation fragile X syndrome (aged 7-55 years, eight females) and 20 healthy controls (matched on age and sex) completed a talk/listen paradigm during high-density EEG recordings. During the talk task, participants repeatedly produced short vocalizations of 'Ah' every 1-2 s for a total of 180 s. During the listen task, participants passively listened to their recordings from the talk task. We compared pre-speech event-related potential activity, N1 suppression to speech sounds, single-trial gamma power and fronto-temporal coherence between groups during these tasks and examined their relation to performance during a naturalistic language task. Prior to speech production, fragile X participants showed reduced pre-speech negativity, reduced fronto-temporal connectivity and greater frontal gamma power compared to controls. N1 suppression during self-generated speech did not differ between groups. Reduced pre-speech activity and increased frontal gamma power prior to speech production were related to less intelligible speech as well as broader social communication deficits in fragile X syndrome. Our findings indicate that coordinated pre-speech activity between frontal and temporal cortices is disrupted in individuals with fragile X in a clinically relevant way and represents a mechanism contributing to prominent speech production problems in the disorder.
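As a rough illustration of the connectivity measure, the sketch below estimates fronto-temporal coherence in a pre-speech window from simulated EEG using SciPy's Welch-based coherence; the sampling rate, window length, frequency band, and channel pairing are assumptions, not the study's parameters.

```python
# Hedged sketch: fronto-temporal coherence in a pre-speech window,
# computed with Welch-based magnitude-squared coherence on simulated EEG.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 500.0                           # sampling rate (Hz), assumed
n_trials, n_samples = 60, 250        # 500 ms pre-speech window per trial

# Simulate a frontal and a temporal channel sharing a weak common signal.
shared = rng.normal(size=(n_trials, n_samples))
frontal = shared + rng.normal(scale=2.0, size=(n_trials, n_samples))
temporal = shared + rng.normal(scale=2.0, size=(n_trials, n_samples))

# Concatenate trials and estimate coherence with Welch's method.
f, coh = coherence(frontal.ravel(), temporal.ravel(), fs=fs, nperseg=n_samples)

# Average coherence in the theta band (4-8 Hz) as one summary value;
# the band of interest is an assumption for illustration.
theta = (f >= 4) & (f <= 8)
print(f"mean theta-band fronto-temporal coherence: {coh[theta].mean():.3f}")
```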
Affiliation(s)
- Lauren M Schmitt
- Division of Developmental and Behavioral Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jun Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, Zhejiang 321004, China
- Ernest V Pedapati
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Department of Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Angela John Thurman
- Psychiatry and Behavioral Sciences, University of California, Davis, MIND Institute, Sacramento, CA, USA
- Leonard Abbeduto
- Psychiatry and Behavioral Sciences, University of California, Davis, MIND Institute, Sacramento, CA, USA
- Craig A Erickson
- Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Department of Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- John A Sweeney
- Department of Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
46
Knolle F, Schwartze M, Schröger E, Kotz SA. Auditory Predictions and Prediction Errors in Response to Self-Initiated Vowels. Front Neurosci 2019; 13:1146. [PMID: 31708737 PMCID: PMC6823252 DOI: 10.3389/fnins.2019.01146] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Accepted: 10/10/2019] [Indexed: 11/13/2022] Open
Abstract
It has been suggested that speech production is accomplished by an internal forward model, reducing processing activity directed to self-produced speech in the auditory cortex. The current study uses an established N1-suppression paradigm comparing self- and externally initiated natural speech sounds to answer two questions: (1) Are forward predictions generated to process complex speech sounds, such as vowels, initiated via a button press? (2) Are prediction errors regarding self-initiated deviant vowels reflected in the corresponding ERP components? Results confirm an N1-suppression in response to self-initiated speech sounds. Furthermore, our results suggest that predictions leading to the N1-suppression effect are specific, as self-initiated deviant vowels do not elicit an N1-suppression effect. Rather, self-initiated deviant vowels elicit an enhanced N2b and P3a compared to externally generated deviants, externally generated standards, or self-initiated standards, again confirming prediction specificity. Results show that prediction errors are salient in self-initiated auditory speech sounds, which may lead to more efficient error correction in speech production.
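A minimal sketch of the N1-suppression index follows, assuming simulated single-channel epochs and an 80-120 ms N1 window (both invented for illustration): the index is simply the difference between the trial-averaged mean amplitudes for self-initiated and externally initiated sounds.

```python
# Hedged sketch: N1-suppression index from trial-averaged ERPs at one channel.
# Epochs and latencies are simulated/assumed, not taken from the cited study.
import numpy as np

rng = np.random.default_rng(3)
fs = 1000.0                                  # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.4, 1.0 / fs)       # epoch from -100 to 400 ms

def simulate_epochs(n1_amplitude, n_trials=100):
    """Toy epochs: a negative Gaussian deflection peaking at 100 ms plus noise."""
    n1 = n1_amplitude * np.exp(-0.5 * ((times - 0.10) / 0.02) ** 2)
    return n1 + rng.normal(scale=2.0, size=(n_trials, times.size))

external = simulate_epochs(n1_amplitude=-6.0)   # externally initiated vowels
self_init = simulate_epochs(n1_amplitude=-3.5)  # self-initiated vowels (attenuated)

# Mean amplitude in an assumed N1 window (80-120 ms) of the trial-averaged ERP.
window = (times >= 0.08) & (times <= 0.12)
n1_external = external.mean(axis=0)[window].mean()
n1_self = self_init.mean(axis=0)[window].mean()

# Suppression index: how much less negative the N1 is for self-initiated sounds.
print(f"N1 external: {n1_external:.2f} µV, N1 self: {n1_self:.2f} µV, "
      f"suppression: {n1_self - n1_external:.2f} µV")
```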
Affiliation(s)
- Franziska Knolle
- Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom; Department of Neuroradiology, Technical University of Munich, Munich, Germany
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Erich Schröger
- Institute of Psychology, Leipzig University, Leipzig, Germany
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
47
Ravignani A, Verga L, Greenfield MD. Interactive rhythms across species: the evolutionary biology of animal chorusing and turn-taking. Ann N Y Acad Sci 2019; 1453:12-21. [PMID: 31515817 PMCID: PMC6790674 DOI: 10.1111/nyas.14230] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 08/13/2019] [Accepted: 08/14/2019] [Indexed: 12/11/2022]
Abstract
The study of human language is progressively moving toward comparative and interactive frameworks, extending the concept of turn-taking to animal communication. While such an endeavor will help us understand the interactive origins of language, any theoretical account for cross-species turn-taking should consider three key points. First, animal turn-taking must incorporate biological studies on animal chorusing, namely how different species coordinate their signals over time. Second, while concepts employed in human communication and turn-taking, such as intentionality, are still debated in animal behavior, lower level mechanisms with clear neurobiological bases can explain much of animal interactive behavior. Third, social behavior, interactivity, and cooperation can be orthogonal, and the alternation of animal signals need not be cooperative. Considering turn-taking a subset of chorusing in the rhythmic dimension may avoid overinterpretation and enhance the comparability of future empirical work.
Affiliation(s)
- Andrea Ravignani
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Institute for Advanced Study, University of Amsterdam, Amsterdam, the Netherlands
- Research Department, Sealcentre Pieterburen, Pieterburen, the Netherlands
- Laura Verga
- Faculty of Psychology and Neuroscience, Department NP&PP, Maastricht University, Maastricht, the Netherlands
- Michael D. Greenfield
- Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, Kansas
- Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI, CNRS UMR 9197, Université de Lyon/Saint-Etienne, Saint-Etienne, France
48
Movement and VIP Interneuron Activation Differentially Modulate Encoding in Mouse Auditory Cortex. eNeuro 2019; 6:ENEURO.0164-19.2019. [PMID: 31481397 PMCID: PMC6751373 DOI: 10.1523/eneuro.0164-19.2019] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 08/02/2019] [Accepted: 08/14/2019] [Indexed: 11/22/2022] Open
Abstract
Information processing in sensory cortex is highly sensitive to nonsensory variables such as anesthetic state, arousal, and task engagement. Recent work in mouse visual cortex suggests that evoked firing rates, stimulus–response mutual information, and encoding efficiency increase when animals are engaged in movement. A disinhibitory circuit appears central to this change: inhibitory neurons expressing vasoactive intestinal peptide (VIP) are activated during movement and disinhibit pyramidal cells by suppressing other inhibitory interneurons. Paradoxically, although movement activates a similar disinhibitory circuit in auditory cortex (ACtx), most ACtx studies report reduced spiking during movement. It is unclear whether the resulting changes in spike rates result in corresponding changes in stimulus–response mutual information. We examined ACtx responses evoked by tone cloud stimuli, in awake mice of both sexes, during spontaneous movement and still conditions. VIP+ cells were optogenetically activated on half of trials, permitting independent analysis of the consequences of movement and VIP activation, as well as their intersection. Movement decreased stimulus-related spike rates as well as mutual information and encoding efficiency. VIP interneuron activation tended to increase stimulus-evoked spike rates but not stimulus–response mutual information, thus reducing encoding efficiency. The intersection of movement and VIP activation was largely consistent with a linear combination of these main effects: VIP activation recovered movement-induced reduction in spike rates, but not information transfer.
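For readers unfamiliar with the information-theoretic measures, the sketch below gives a plug-in estimate of stimulus-response mutual information from simulated Poisson spike counts and defines encoding efficiency as bits per spike; the stimulus set, rates, and lack of bias correction are simplifications, not the study's method.

```python
# Hedged sketch: plug-in estimate of stimulus-response mutual information
# from discrete spike counts, and "encoding efficiency" as bits per spike.
# Toy spike counts only; the cited study's estimator and corrections may differ.
import numpy as np

rng = np.random.default_rng(4)
n_trials_per_stim, n_stimuli = 200, 4

# Simulate Poisson spike counts whose mean rate depends on the stimulus.
rates = np.array([2.0, 4.0, 6.0, 8.0])
stimuli = np.repeat(np.arange(n_stimuli), n_trials_per_stim)
counts = rng.poisson(rates[stimuli])

def mutual_information_bits(x, y):
    """Plug-in MI between two discrete variables, in bits (no bias correction)."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

mi = mutual_information_bits(stimuli, counts)
efficiency = mi / counts.mean()          # bits per spike, one simple definition
print(f"MI = {mi:.3f} bits, mean count = {counts.mean():.2f} spikes, "
      f"efficiency = {efficiency:.3f} bits/spike")
```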
49
Eliades SJ, Wang X. Corollary Discharge Mechanisms During Vocal Production in Marmoset Monkeys. BIOLOGICAL PSYCHIATRY. COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2019; 4:805-812. [PMID: 31420219 PMCID: PMC6733626 DOI: 10.1016/j.bpsc.2019.06.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Revised: 06/24/2019] [Accepted: 06/24/2019] [Indexed: 01/11/2023]
Abstract
Interactions between motor systems and sensory processing are ubiquitous throughout the animal kingdom and play an important role in many sensorimotor behaviors, including both human speech and animal vocalization. During vocal production, the auditory system plays important roles in both encoding feedback of produced sounds, allowing one to self-monitor for vocal errors, and simultaneously maintaining sensitivity to the outside acoustic environment. Supporting these roles is an efferent motor-to-sensory signal known as a corollary discharge. This review summarizes recent work on the role of such signaling during vocalization in the marmoset monkey, a nonhuman primate model of social vocal communication.
Affiliation(s)
- Steven J. Eliades
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, U.S.A
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, U.S.A
50
Naunheim ML, Yung KC, Schneider SL, Henderson-Sabes J, Kothare H, Hinkley LB, Mizuiri D, Klein DJ, Houde JF, Nagarajan SS, Cheung SW. Cortical networks for speech motor control in unilateral vocal fold paralysis. Laryngoscope 2019; 129:2125-2130. [PMID: 30570142 DOI: 10.1002/lary.27730] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2018] [Revised: 09/09/2018] [Accepted: 11/07/2018] [Indexed: 12/13/2022]
Abstract
OBJECTIVE To evaluate brain networks for motor control of voice production in patients with treated unilateral vocal fold paralysis (UVFP). STUDY DESIGN Cross-sectional comparison. METHODS Nine UVFP patients treated by type I thyroplasty and 11 control subjects were compared using magnetoencephalographic imaging to measure beta band (12-30 Hz) neural oscillations during voice production with perturbation of pitch feedback. Differences in beta band power relative to baseline were analyzed to identify cortical areas with abnormal activity within the 400 ms perturbation period and 125 ms beyond, for a total of 525 ms. RESULTS Whole-brain task-induced beta band activation patterns were qualitatively similar in treated UVFP patients and healthy controls. Central vocal motor control plasticity in UVFP was expressed within constitutive components of the central human communication networks identified in healthy controls. Treated UVFP patients exhibited statistically significant enhancement (P < 0.05) in beta band activity following pitch perturbation onset in left auditory cortex to 525 ms, left premotor cortex to 225 ms, and left and right frontal cortex to 525 ms. CONCLUSION This study further corroborates that a peripheral motor impairment of the larynx can affect central cortical networks engaged in auditory feedback processing, vocal motor control, and judgment of voice-as-self. Future research to dissect functional relationships among constitutive cortical networks could reveal neurophysiological bases of central contributions to voice production impairment in UVFP. Those novel insights would motivate innovative treatments to improve voice production and reduce misalignment of voice-quality judgment between clinicians and patients. LEVEL OF EVIDENCE 3b. Laryngoscope, 129:2125-2130, 2019.
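As an illustration of the power measure, the sketch below computes beta-band (12-30 Hz) power change relative to a pre-perturbation baseline on a simulated sensor signal, using a Butterworth band-pass filter and the Hilbert envelope; the sampling rate, epoch windows, and filter order are assumptions rather than the study's MEG pipeline.

```python
# Hedged sketch: beta-band (12-30 Hz) power change relative to baseline,
# via band-pass filtering and the Hilbert envelope on a simulated sensor signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs = 600.0                                   # sampling rate (Hz), assumed
times = np.arange(-0.3, 0.525, 1.0 / fs)     # baseline through 525 ms post-onset
signal = rng.normal(size=times.size)
# Inject extra 20 Hz power after perturbation onset (t = 0) for illustration.
signal[times >= 0] += 0.8 * np.sin(2 * np.pi * 20 * times[times >= 0])

# Band-pass 12-30 Hz and take the squared Hilbert envelope as instantaneous power.
b, a = butter(4, [12 / (fs / 2), 30 / (fs / 2)], btype="bandpass")
beta_power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2

baseline = beta_power[times < 0].mean()
post = beta_power[(times >= 0) & (times <= 0.525)].mean()
print(f"beta power change vs. baseline: {100 * (post - baseline) / baseline:.1f}%")
```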
Affiliation(s)
- Molly L Naunheim
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- Katherine C Yung
- San Francisco Voice & Swallowing, University of California, San Francisco, California, U.S.A
- Sarah L Schneider
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- Jennifer Henderson-Sabes
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- Hardik Kothare
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, U.S.A
- Leighton B Hinkley
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, U.S.A
- Danielle Mizuiri
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, U.S.A
- David J Klein
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- John F Houde
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- Srikantan S Nagarajan
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, U.S.A
- Steven W Cheung
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, U.S.A