1. Whitford TJ, Spencer KM, Godwin M, Hirano Y, Chung LKH, Vodovozov W, Griffiths O, Harris AWF, Le Pelley ME, Jack BN. Gamma and Theta/Alpha-Band Oscillations in the Electroencephalogram Distinguish the Content of Inner Speech. eNeuro 2025; 12:ENEURO.0297-24.2025. PMID: 39843220; PMCID: PMC11810546; DOI: 10.1523/eneuro.0297-24.2025.
Abstract
Inner speech refers to the silent production of language in one's mind. As a purely mental action without obvious physical manifestations, inner speech has been notoriously difficult to quantify. To address this issue, the present study repurposed the phenomenon of speaking-induced suppression, wherein overt speech has been consistently shown to elicit reduced auditory evoked potentials compared with externally generated speech, as well as changes in oscillatory activity in gamma and theta frequency bands. Given the functional similarities between inner and overt speech, we used an established experimental protocol to investigate whether similar metrics could be used to distinguish the content of inner speech. Healthy participants (n = 129) produced an inner syllable at a precisely specified time. An audible syllable was concurrently presented which either matched or mismatched the content of the inner syllable. The results revealed that Match and Mismatch conditions could be differentiated on the basis of their evoked oscillations in the gamma, theta, and alpha bands. Notably, there was a gamma-band oscillation in the vicinity of the P2 that differed between the Match and Mismatch conditions, suggesting that "late" gamma-band activity may index consciously perceived expectancy violations, or cognitive prediction errors. Regarding the auditory evoked potentials, the N1 component was suppressed in the Match condition while the P2 component was suppressed in the Mismatch condition, replicating previous findings. This study provides support for the existence of "inner speaking-induced suppression", and demonstrates that inner syllables can be differentiated based on their influence on the electroencephalographic activity elicited by simultaneously-presented audible syllables.
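For readers who want a concrete sense of the analysis this abstract describes, below is a minimal sketch of how evoked (trial-averaged) oscillatory power in the theta/alpha and gamma bands could be compared between Match and Mismatch conditions. This is not the authors' pipeline: the sampling rate, wavelet parameters, band limits, and the placeholder epoch arrays are all assumptions for illustration.

```python
import numpy as np

def morlet_power(evoked, sfreq, freqs, n_cycles=5.0):
    """Power (n_freqs x n_times) of an evoked waveform via complex Morlet wavelets."""
    power = np.empty((len(freqs), evoked.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)                 # temporal width (s)
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))       # unit-energy normalisation
        power[i] = np.abs(np.convolve(evoked, wavelet, mode="same")) ** 2
    return power

sfreq = 500.0                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs_match = rng.standard_normal((100, 1500))     # placeholder: trials x samples
epochs_mismatch = rng.standard_normal((100, 1500))  # placeholder: trials x samples

freqs = np.arange(4.0, 81.0, 2.0)
pow_match = morlet_power(epochs_match.mean(axis=0), sfreq, freqs)      # evoked power
pow_mismatch = morlet_power(epochs_mismatch.mean(axis=0), sfreq, freqs)

gamma = (freqs >= 30) & (freqs <= 80)               # assumed gamma-band limits
theta_alpha = (freqs >= 4) & (freqs <= 12)          # assumed theta/alpha-band limits
print("gamma evoked power, Match vs Mismatch:",
      pow_match[gamma].mean(), pow_mismatch[gamma].mean())
print("theta/alpha evoked power, Match vs Mismatch:",
      pow_match[theta_alpha].mean(), pow_mismatch[theta_alpha].mean())
```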
Affiliation(s)
- Thomas J Whitford
- School of Psychology, University of New South Wales (UNSW Sydney), Sydney, New South Wales 2052, Australia
- Brain Dynamics Centre, Westmead Institute for Medical Research, Sydney, New South Wales 2145, Australia
- Kevin M Spencer
- Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, Massachusetts 02130
- Marianthe Godwin
- School of Psychology, University of New South Wales (UNSW Sydney), Sydney, New South Wales 2052, Australia
- Yoji Hirano
- Department of Psychiatry, Division of Clinical Neuroscience, Faculty of Medicine, University of Miyazaki, Miyazaki 889-2192, Japan
- Lawrence Kin-Hei Chung
- Department of Psychology, The Chinese University of Hong Kong, Hong Kong 999077, Hong Kong SAR, China
- Wadim Vodovozov
- Department of Psychiatry, Zucker Hillside Hospital, Glen Oaks, New York 11004
- Oren Griffiths
- School of Psychology, University of Newcastle, Newcastle, New South Wales 2308, Australia
- Anthony W F Harris
- Brain Dynamics Centre, Westmead Institute for Medical Research, Sydney, New South Wales 2145, Australia
- Speciality of Psychiatry, Sydney Medical School, University of Sydney, Sydney, New South Wales 2006, Australia
- Mike E Le Pelley
- School of Psychology, University of New South Wales (UNSW Sydney), Sydney, New South Wales 2052, Australia
- Bradley N Jack
- Research School of Psychology, Australian National University, Canberra 0200, Australian Capital Territory, Australia
2. Tsunada J, Wang X, Eliades SJ. Multiple processes of vocal sensory-motor interaction in primate auditory cortex. Nat Commun 2024; 15:3093. PMID: 38600118; PMCID: PMC11006904; DOI: 10.1038/s41467-024-47510-2.
Abstract
Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These interactions result from top-down corollary discharges that relay predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes rather than a single mechanism: one that suppresses neural activity during vocalization and another that enhances sensitivity to sensory feedback. Single-neuron recordings have been unable to disambiguate these processes because motor-related signals overlap with sensory inputs. Here, we sought to disentangle them in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression appeared broadly regardless of frequency tuning (gating), whereas tonic suppression was selective for vocal frequencies and feedback (prediction). These findings suggest that the auditory cortex is modulated by concurrent corollary discharges with distinct computational mechanisms during vocalization.
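As a concrete illustration of the kind of suppression measure used in this literature, the sketch below computes a response modulation index separately for within-phrase (phasic) and whole-bout (tonic) windows. It is not the authors' analysis code; the spike times, window boundaries, and index formula are illustrative assumptions.

```python
import numpy as np

def rate(spikes, start, stop):
    """Mean firing rate (spikes/s) of a spike-time array within [start, stop)."""
    spikes = np.asarray(spikes)
    return np.sum((spikes >= start) & (spikes < stop)) / (stop - start)

def modulation_index(r_vocal, r_baseline):
    """Index in [-1, 1]; negative values indicate vocalization-induced suppression."""
    return (r_vocal - r_baseline) / (r_vocal + r_baseline + 1e-12)

spikes = np.sort(np.random.default_rng(1).uniform(0.0, 6.0, size=120))  # fake unit
baseline = (0.0, 2.0)                            # pre-vocal baseline window (s)
phrases = [(2.2, 2.5), (3.0, 3.3), (3.8, 4.1)]   # assumed phrase on/offsets in a bout
bout = (phrases[0][0], phrases[-1][1])

r_base = rate(spikes, *baseline)
r_phasic = np.mean([rate(spikes, on, off) for on, off in phrases])  # within phrases
r_tonic = rate(spikes, *bout)                    # whole bout, including gaps

print("phasic modulation index:", modulation_index(r_phasic, r_base))
print("tonic  modulation index:", modulation_index(r_tonic, r_base))
```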
Affiliation(s)
- Joji Tsunada
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Chinese Institute for Brain Research, Beijing, China
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Steven J Eliades
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC, USA
3. Tsunada J, Eliades SJ. Frontal-Auditory Cortical Interactions and Sensory Prediction During Vocal Production in Marmoset Monkeys. bioRxiv [Preprint] 2024:2024.01.28.577656. PMID: 38352422; PMCID: PMC10862695; DOI: 10.1101/2024.01.28.577656.
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. Consistent with this model, recent evidence has demonstrated that the auditory cortex is suppressed immediately before and during vocal production, yet remains sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to result from top-down signals containing information about the intended vocal output, potentially originating from motor or other frontal cortical areas. However, whether such frontal areas are the source of suppressive and predictive signaling to the auditory cortex during vocalization is unknown. Here, we simultaneously recorded neural activity from both the auditory and frontal cortices of marmoset monkeys while they produced self-initiated vocalizations. We found increases in neural activity in both brain areas preceding the onset of vocal production, notably changes in both multi-unit activity and local field potential theta-band power. Connectivity analysis using Granger causality demonstrated that the frontal cortex sends directed signals to the auditory cortex during this pre-vocal period. Importantly, this pre-vocal activity predicted both the vocalization-induced suppression of the auditory cortex and the acoustics of subsequent vocalizations. These results suggest that frontal cortical areas communicate with the auditory cortex before vocal production, with frontal-auditory signals that may reflect the transmission of sensory prediction information. This interaction between frontal and auditory cortices may contribute to mechanisms that calculate errors between intended and actual vocal outputs during vocal communication.
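The directed-connectivity idea described here can be illustrated with a standard Granger-causality test on two simultaneously recorded signals. The sketch below applies statsmodels to surrogate "frontal" and "auditory" traces; the lag range, surrogate data, and variable names are assumptions, not the authors' method or parameters.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 2000
frontal = rng.standard_normal(n)
auditory = np.zeros(n)
for t in range(2, n):                 # auditory lags frontal by ~2 samples (surrogate)
    auditory[t] = 0.5 * frontal[t - 2] + 0.3 * auditory[t - 1] + rng.standard_normal()

# Column order matters: the test asks whether column 2 Granger-causes column 1.
maxlag = 5
frontal_to_auditory = grangercausalitytests(np.column_stack([auditory, frontal]), maxlag)
auditory_to_frontal = grangercausalitytests(np.column_stack([frontal, auditory]), maxlag)

for lag in range(1, maxlag + 1):
    p_fa = frontal_to_auditory[lag][0]["ssr_ftest"][1]
    p_af = auditory_to_frontal[lag][0]["ssr_ftest"][1]
    print(f"lag {lag}: frontal->auditory p={p_fa:.3g}, auditory->frontal p={p_af:.3g}")
```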
Affiliation(s)
- Joji Tsunada
- Chinese Institute for Brain Research, Beijing, China
- Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Iwate, Japan
- Steven J. Eliades
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
4. Echolocation-related reversal of information flow in a cortical vocalization network. Nat Commun 2022; 13:3642. PMID: 35752629; PMCID: PMC9233670; DOI: 10.1038/s41467-022-31230-6.
Abstract
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
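The transfer-entropy analysis mentioned in this abstract can be illustrated with a simple binned estimator of directed information flow between two signals. The sketch below is not the authors' estimator; the binning, the history length of one sample, and the surrogate "frontal" and "auditory" signals are assumptions for demonstration.

```python
import numpy as np

def transfer_entropy(source, target, n_bins=8):
    """TE(source -> target) in bits, with history length 1 and lag 1 (binned estimate)."""
    s = np.digitize(source, np.histogram_bin_edges(source, n_bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, n_bins)[1:-1])
    yf, yp, xp = t[1:], t[:-1], s[:-1]        # target future, target past, source past
    te = 0.0
    for a in np.unique(yf):
        for b in np.unique(yp):
            for c in np.unique(xp):
                p_abc = np.mean((yf == a) & (yp == b) & (xp == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((yp == b) & (xp == c))
                p_ab = np.mean((yf == a) & (yp == b))
                p_b = np.mean(yp == b)
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(3)
frontal = rng.standard_normal(5000)
auditory = 0.6 * np.roll(frontal, 1) + 0.4 * rng.standard_normal(5000)  # frontal leads

print("TE frontal->auditory:", transfer_entropy(frontal, auditory))
print("TE auditory->frontal:", transfer_entropy(auditory, frontal))
```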
5. Narrow and Broad γ Bands Process Complementary Visual Information in Mouse Primary Visual Cortex. eNeuro 2021; 8:ENEURO.0106-21.2021. PMID: 34663617; PMCID: PMC8570688; DOI: 10.1523/eneuro.0106-21.2021.
Abstract
The γ band plays a key role in the encoding of visual features in the primary visual cortex (V1). In rodent V1, two ranges within the γ band are sensitive to contrast: a broad γ band (BB), whose power increases with contrast, and a narrow γ band (NB), peaking at ∼60 Hz, whose power decreases with contrast. The functional roles of the two bands and the neural circuits that generate them are not yet completely clear. Here, combining experimental and simulated data, we show that in mouse V1 (1) BB carries information about high contrast and NB about low contrast, and (2) BB modulation depends on the excitatory-inhibitory interplay in the cortex, while NB modulation is due to entrainment to the thalamic drive. In awake mice presented with alternating gratings, NB power progressively decreased from low to intermediate levels of contrast, where it reached a plateau. Conversely, BB power was constant across low levels of contrast but progressively increased from intermediate to high levels of contrast. Furthermore, the BB response was stronger immediately after contrast reversal, while the opposite held for NB. These complementary modulations were reproduced by a recurrent excitatory-inhibitory leaky integrate-and-fire network, provided that the thalamic input comprised a sustained and a periodic component with complementary sensitivity ranges. These results show that in rodents the thalamically driven NB plays a specific key role in encoding visual contrast. Moreover, we propose a simple and effective network model of the response to visual stimuli in rodents that may help in investigating network dysfunctions underlying pathological visual information processing.
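To make the proposed mechanism concrete, the sketch below simulates a single leaky integrate-and-fire neuron (far simpler than the recurrent excitatory-inhibitory network used in the paper) driven by a thalamic-like input with a sustained component plus a ~60 Hz periodic component, and checks for narrow-band entrainment in the spike-train spectrum. All parameter values and the single-neuron simplification are assumptions for illustration.

```python
import numpy as np

dt, T = 1e-4, 2.0                                # time step and duration (s)
t = np.arange(0.0, T, dt)
tau_m, v_rest, v_thresh, v_reset = 0.01, -70e-3, -50e-3, -70e-3  # LIF parameters (s, V)

# Thalamic-like drive (V above rest): sustained component plus a ~60 Hz periodic
# component; per the abstract, the periodic part would dominate at low contrast.
sustained = 22e-3
periodic = 6e-3 * np.sin(2 * np.pi * 60.0 * t)
noise = 1e-3 * np.random.default_rng(4).standard_normal(t.size)
drive = sustained + periodic + noise

v = np.full(t.size, v_rest)
spike_times = []
for i in range(1, t.size):
    dv = (-(v[i - 1] - v_rest) + drive[i - 1]) / tau_m * dt   # leaky integration
    v[i] = v[i - 1] + dv
    if v[i] >= v_thresh:                                      # threshold: spike and reset
        spike_times.append(t[i])
        v[i] = v_reset

# Entrainment check: spike-train power near 60 Hz relative to total (non-DC) power.
spike_train = np.histogram(spike_times, bins=t)[0].astype(float)
spectrum = np.abs(np.fft.rfft(spike_train - spike_train.mean())) ** 2
freqs = np.fft.rfftfreq(spike_train.size, d=dt)
band = (freqs > 55.0) & (freqs < 65.0)
print(f"{len(spike_times)} spikes; fraction of power in 55-65 Hz: "
      f"{spectrum[band].sum() / spectrum[1:].sum():.3f}")
```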