1
Tabari F, Patron C, Cryer H, Johari K. HD-tDCS over left supplementary motor area differentially modulated neural correlates of motor planning for speech vs. limb movement. Int J Psychophysiol 2024; 201:112357. PMID: 38701898. DOI: 10.1016/j.ijpsycho.2024.112357.
Abstract
The supplementary motor area (SMA) is implicated in the planning, execution, and control of speech production and limb movement. The SMA is among the putative generators of pre-movement EEG activity, which is thought to provide neural markers of motor planning. In neurological conditions such as Parkinson's disease, abnormal pre-movement neural activity within the SMA has been reported during both speech production and limb movement, making this region a potential target for non-invasive brain stimulation for both domains. The present study took an initial step by examining the application of high-definition transcranial direct current stimulation (HD-tDCS) over the left SMA in 24 neurologically intact adults. Event-related potentials (ERPs) were then recorded while participants performed speech and limb movement tasks. Data were collected in three counterbalanced sessions: anodal, cathodal, and sham HD-tDCS. Relative to sham stimulation, anodal, but not cathodal, HD-tDCS significantly attenuated ERPs prior to the onset of speech production. In contrast, neither anodal nor cathodal HD-tDCS significantly modulated ERPs prior to the onset of limb movement compared to sham stimulation. These findings show that the neural correlates of motor planning can be modulated using HD-tDCS over the left SMA in neurotypical adults, with translational implications for neurological conditions that impair speech production. The absence of a stimulation effect on ERPs prior to limb movement was unexpected, and future studies are warranted to explore this result further.
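As an illustration of the kind of analysis this abstract describes, the sketch below contrasts mean pre-onset ERP amplitude between an active HD-tDCS session and sham with a paired t-test. It is a minimal sketch only: the sampling rate, pre-movement window, and simulated epochs are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-1.0, 0.0, 1 / fs)          # 1 s epoch ending at speech onset
win = (t >= -0.5) & (t < 0.0)             # pre-movement window (assumed 500 ms)

def mean_premovement_amplitude(epochs):
    """Mean amplitude per trial in the pre-onset window (epochs: trials x time)."""
    return epochs[:, win].mean(axis=1)

# Placeholder epochs standing in for real EEG (trials x samples), one set per session.
rng = np.random.default_rng(0)
sham = rng.normal(-2.0, 1.0, size=(60, t.size))
anodal = rng.normal(-1.2, 1.0, size=(60, t.size))   # attenuated pre-speech ERP

amp_sham = mean_premovement_amplitude(sham)
amp_anodal = mean_premovement_amplitude(anodal)
tval, pval = stats.ttest_rel(amp_anodal, amp_sham)  # within-subject contrast
print(f"anodal vs. sham: t = {tval:.2f}, p = {pval:.3g}")
```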
Affiliation(s)
- Fatemeh Tabari: Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Celeste Patron: Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Hope Cryer: Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
- Karim Johari: Human Neurophysiology and Neuromodulation Lab, Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA
2
Gu J, Buidze T, Zhao K, Gläscher J, Fu X. The neural network of sensory attenuation: A neuroimaging meta-analysis. Psychon Bull Rev 2024. PMID: 38954157. DOI: 10.3758/s13423-024-02532-1.
Abstract
Sensory attenuation refers to the reduction in sensory intensity for stimuli resulting from self-initiated actions compared to externally initiated stimuli. A classic example is that scratching oneself does not feel itchy. This phenomenon extends across various sensory modalities, including visual, auditory, somatosensory, and nociceptive stimuli. The internal forward model proposes that during voluntary actions, an efference copy of the action command is used to predict the sensory feedback. This predicted sensory feedback is then compared with the actual sensory feedback, leading to the suppression or reduction of sensory stimuli originating from self-initiated actions. To further elucidate the neural mechanisms underlying the sensory attenuation effect, we conducted an extensive meta-analysis of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies. Using activation likelihood estimation (ALE), our results revealed significant activations in a prominent cluster encompassing the right superior temporal gyrus (rSTG), right middle temporal gyrus (rMTG), and right insula when comparing externally generated with self-generated conditions. Additionally, significant activation was observed in the right anterior cerebellum when comparing self-generated to externally generated conditions. Further analysis using meta-analytic connectivity modeling (MACM) revealed distinct brain networks co-activated with the rMTG and the right cerebellum, respectively. Based on these findings, we propose that sensory attenuation arises from the suppression of the reflexive inputs elicited by self-initiated actions through internal forward modeling by a cerebellum-centered action prediction network, enabling "sensory conflict detection" regions to discriminate effectively between inputs resulting from self-initiated actions and those originating externally.
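The forward-model account summarized above can be reduced to a toy comparator, sketched below under invented parameters (the attenuation gain and stimulus values are illustrative, not estimates from the meta-analysis): a prediction derived from the efference copy is subtracted from incoming feedback, so self-generated input is attenuated while identical external input passes through at full strength.

```python
def perceived_intensity(stimulus, efference_copy=None, attenuation=0.7):
    """Toy forward-model comparator: feedback predicted from an efference
    copy is scaled by `attenuation` and subtracted from the actual input."""
    prediction = attenuation * efference_copy if efference_copy is not None else 0.0
    return stimulus - prediction

touch = 1.0                                              # same physical stimulus
print(perceived_intensity(touch, efference_copy=touch))  # self-scratch -> 0.3
print(perceived_intensity(touch))                        # external touch -> 1.0
```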
Affiliation(s)
- Jingjin Gu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
- Tatia Buidze: Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Ke Zhao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
- Jan Gläscher: Institute for Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Xiaolan Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, 100049, China
3
Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. J Speech Lang Hear Res 2024; 67:1424-1460. PMID: 38593006. DOI: 10.1044/2024_jslhr-23-00575.
Abstract
PURPOSE: Oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production.
METHOD: The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience.
RESULTS AND CONCLUSIONS: The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
4
Teghipco A, Okada K, Murphy E, Hickok G. Predictive Coding and Internal Error Correction in Speech Production. Neurobiol Lang 2023; 4:81-119. PMID: 37229143. PMCID: PMC10205072. DOI: 10.1162/nol_a_00088.
Abstract
Speech production involves the careful orchestration of sophisticated systems, yet overt speech errors rarely occur under naturalistic conditions. The present functional magnetic resonance imaging study sought neural evidence for internal error detection and correction by leveraging a tongue-twister paradigm that induces the potential for speech errors while excluding any overt errors from analysis. Previous work using the same paradigm in the context of silently articulated and imagined speech production tasks demonstrated forward predictive signals in auditory cortex during speech and presented suggestive evidence of internal error correction in left posterior middle temporal gyrus (pMTG), on the basis that this area tended to show a stronger response when potential speech errors were biased toward nonwords compared to words (Okada et al., 2018). The present study built on this prior work by attempting to replicate the forward prediction and lexicality effects in nearly twice as many participants, and introduced novel stimuli designed to further tax internal error detection and correction mechanisms by biasing speech errors toward taboo words. The forward prediction effect was replicated. While no evidence was found for a significant difference in brain response as a function of the lexical status of the potential speech error, biasing potential errors toward taboo words elicited a significantly greater response in left pMTG than biasing errors toward (neutral) words. Other brain areas also showed a preferential response for taboo words, but these responded below baseline and, as indicated by a decoding analysis, were less likely to reflect language processing, implicating left pMTG in internal error correction.
Affiliation(s)
- Alex Teghipco: Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Kayoko Okada: Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Emma Murphy: Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, CA, USA
|
5
|
Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022; 12:18789. [PMID: 36335137 PMCID: PMC9637225 DOI: 10.1038/s41598-022-22041-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 10/07/2022] [Indexed: 11/06/2022] Open
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared with an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. They either attentively listened to the dialogue, overtly repeated (i.e., shadowed) the attended speech, or performed visual or speech motor control tasks in which they did not attend to the speech and their responses were unrelated to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction is present for motor vs. perceptual processing of speech already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and the generation of motor speech that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than on any specific anatomical landmark, as suggested by some previous studies.
6
Shamma S, Patel P, Mukherjee S, Marion G, Khalighinejad B, Han C, Herrero J, Bickel S, Mehta A, Mesgarani N. Learning Speech Production and Perception through Sensorimotor Interactions. Cereb Cortex Commun 2020; 2:tgaa091. PMID: 33506209. PMCID: PMC7811190. DOI: 10.1093/texcom/tgaa091.
Abstract
Action and perception are closely linked in many behaviors, necessitating close coordination between sensory and motor neural processes to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-order auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection, as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
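The simulations' central claim, that the forward projection supports learning of vocal tract control rather than only on-line prediction, follows the logic of distal-teacher learning, which the toy sketch below caricatures (the linear "vocal tract", babbling regression, and learning rate are my assumptions, not the authors' model): a forward model fit from motor babbling supplies the gradient that trains a controller to hit auditory targets, without ever differentiating through the real plant.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))                 # toy "vocal tract": commands -> acoustics

# 1) Babbling: fit a linear forward model F (motor command -> predicted acoustics).
X = rng.normal(size=(500, 3))               # random motor commands
Y = X @ A.T                                 # resulting acoustics
F = np.linalg.lstsq(X, Y, rcond=None)[0].T  # F approximates A

# 2) Train a controller W (acoustic target -> command) through the forward model:
#    the error gradient is backpropagated through F, never through the plant A.
W = np.zeros((3, 2))
for _ in range(2000):
    target = rng.normal(size=2)
    error = F @ (W @ target) - target       # predicted acoustic error
    W -= 0.05 * np.outer(F.T @ error, target)

target = np.array([0.5, -1.0])
print(A @ (W @ target))                     # real plant output ~ [0.5, -1.0]
```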
Affiliation(s)
- Shihab Shamma: Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, 75005 Paris, France
- Prachi Patel: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Shoutik Mukherjee: Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Guilhem Marion: Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, 75005 Paris, France
- Bahar Khalighinejad: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Cong Han: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jose Herrero: Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
- Stephan Bickel: Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
- Ashesh Mehta: Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA; The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
- Nima Mesgarani: Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
7
Floegel M, Fuchs S, Kell CA. Differential contributions of the two cerebral hemispheres to temporal and spectral speech feedback control. Nat Commun 2020; 11:2839. PMID: 32503986. PMCID: PMC7275068. DOI: 10.1038/s41467-020-16743-2.
Abstract
Proper speech production requires auditory speech feedback control. Models of speech production associate this function with the right cerebral hemisphere while the left hemisphere is proposed to host speech motor programs. However, previous studies have investigated only spectral perturbations of the auditory speech feedback. Since auditory perception is known to be lateralized, with right-lateralized analysis of spectral features and left-lateralized processing of temporal features, it is unclear whether the observed right-lateralization of auditory speech feedback processing reflects a preference for speech feedback control or for spectral processing in general. Here we use a behavioral speech adaptation experiment with dichotically presented altered auditory feedback and an analogous fMRI experiment with binaurally presented altered feedback to confirm a right hemisphere preference for spectral feedback control and to reveal a left hemisphere preference for temporal feedback control during speaking. These results indicate that auditory feedback control involves both hemispheres with differential contributions along the spectro-temporal axis.
Affiliation(s)
- Mareike Floegel: Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Schleusenweg 2-16, 60528, Frankfurt, Germany
- Susanne Fuchs: Leibniz-Centre General Linguistics (ZAS), Schuetzenstr. 18, 10117, Berlin, Germany
- Christian A Kell: Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Schleusenweg 2-16, 60528, Frankfurt, Germany
8
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
9
Abstract
Human speech perception is a paradigm example of the complexity of human linguistic processing; however, speech is also the dominant way of expressing vocal identity and is critically important for social interactions. Here, I review the ways in which the speech signal, the talker, and the social nature of speech interact, and how this may be computed in the human brain, using models and approaches from nonhuman primate studies. I explore the extent to which domain-general approaches may be able to account for some of these neural findings. Finally, I address the importance of extending these findings into a better understanding of the social use of speech in conversations.
Affiliation(s)
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
10
Grandchamp R, Rapin L, Perrone-Bertolotti M, Pichat C, Haldin C, Cousin E, Lachaux JP, Dohen M, Perrier P, Garnier M, Baciu M, Lœvenbruck H. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Front Psychol 2019; 10:2019. PMID: 31620039. PMCID: PMC6759632. DOI: 10.3389/fpsyg.2019.02019.
Abstract
Inner speech has been shown to vary in form along several dimensions. Along the condensation dimension, condensed forms of inner speech have been described that are thought to be deprived of acoustic, phonological, and even syntactic qualities; expanded forms, at the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as that of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation, or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on the neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along the dialogality and intentionality dimensions, to examine the validity of the neuroanatomical correlates posited in ConDialInt; condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation. Switching from first-person to third-person perspective resulted in activations in the precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence of spontaneous inner speech and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.
Affiliation(s)
- Romain Grandchamp: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Lucile Rapin: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Cédric Pichat: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Célise Haldin: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Emilie Cousin: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Jean-Philippe Lachaux: INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neurosciences Research Center, Bron, France
- Marion Dohen: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Pascal Perrier: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Maëva Garnier: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Monica Baciu: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Hélène Lœvenbruck: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
11
Yamamoto AK, Parker Jones O, Hope TMH, Prejawa S, Oberhuber M, Ludersdorfer P, Yousry TA, Green DW, Price CJ. A special role for the right posterior superior temporal sulcus during speech production. Neuroimage 2019; 203:116184. PMID: 31520744. PMCID: PMC6876272. DOI: 10.1016/j.neuroimage.2019.116184.
Abstract
This fMRI study of 24 healthy human participants investigated whether any part of the auditory cortex was more responsive to self-generated speech sounds than to hearing another person speak. The results demonstrate a double dissociation between two different parts of the auditory cortex. In the right posterior superior temporal sulcus (RpSTS), activation was higher during speech production than while listening to auditory stimuli, whereas in bilateral superior temporal gyri (STG), activation was higher while listening to auditory stimuli than during speech production. In the second part of the study, we investigated the function of the identified regions by examining how activation changed across a range of listening and speech production tasks that systematically varied the demands on acoustic, semantic, phonological, and orthographic processing. In RpSTS, activation during auditory conditions was higher in the absence of semantic cues, plausibly indicating increased attention to the spectro-temporal features of auditory inputs. In addition, RpSTS responded in the absence of any auditory input when participants made one-back matching decisions on visually presented pseudowords. After analysing the influence of visual, phonological, semantic and orthographic processing, we propose that RpSTS (i) contributes to short-term memory of speech sounds, (ii) supports spectro-temporal processing of auditory input, and (iii) may play a role in integrating auditory expectations with auditory input. In contrast, activation in bilateral STG was sensitive to acoustic input and did not respond in the absence of auditory input. The special role of RpSTS during speech production therefore merits further investigation if we are to fully understand the neural mechanisms supporting speech production during speech acquisition, adult life, hearing loss and after brain injury.
Highlights:
- In right auditory cortex, a region is more sensitive to one's own speech than to another's.
- This region (RpSTS) responds to phonological input in the absence of auditory input.
- RpSTS may match auditory feedback with internal representations of speech sounds.
Affiliation(s)
- Adam Kenji Yamamoto: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, Queen Square, London, United Kingdom
- Oiwi Parker Jones: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom; FMRIB Centre and Wolfson College, University of Oxford, Oxford, United Kingdom
- Thomas M H Hope: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom
- Susan Prejawa: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom; Collaborative Research Centre 1052 "Obesity Mechanisms", Faculty of Medicine, University of Leipzig, Leipzig, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marion Oberhuber: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom
- Philipp Ludersdorfer: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom
- Tarek A Yousry: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, Queen Square, London, United Kingdom
- David W Green: Experimental Psychology, University College London, London, United Kingdom
- Cathy J Price: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, Queen Square, London, United Kingdom
|
12
|
Whitford TJ. Speaking-Induced Suppression of the Auditory Cortex in Humans and Its Relevance to Schizophrenia. BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2019; 4:791-804. [PMID: 31399393 DOI: 10.1016/j.bpsc.2019.05.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/21/2019] [Accepted: 05/22/2019] [Indexed: 01/13/2023]
Abstract
Speaking-induced suppression (SIS) is the phenomenon whereby the sounds one generates by overt speech elicit a smaller neurophysiological response in the auditory cortex than comparable sounds that are externally generated. SIS is a specific example of the more general phenomenon of self-suppression. SIS has been well established in nonhuman animals and is believed to involve the action of corollary discharges. This review summarizes, first, the evidence for SIS in healthy human participants, in whom it has most commonly been assessed with electroencephalography and/or magnetoencephalography using an experimental paradigm known as "Talk-Listen"; and second, the growing number of Talk-Listen studies that have reported subnormal levels of SIS in patients with schizophrenia. This result is theoretically significant, as it provides a plausible explanation for some of the most distinctive and characteristic symptoms of schizophrenia, namely the first-rank symptoms. In particular, while the failure to suppress the neural consequences of self-generated movements (such as those associated with overt speech) provides a prima facie explanation for delusions of control, the failure to suppress the neural consequences of self-generated inner speech provides a plausible explanation for certain classes of auditory-verbal hallucinations, such as audible thoughts. While the empirical evidence for a relationship between SIS and the first-rank symptoms is currently limited, I predict that future studies with more sensitive experimental designs will confirm its existence. Establishing the existence of a causal, mechanistic relationship would represent a major step forward in our understanding of schizophrenia, which is a necessary precursor to the development of novel treatments.
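In the Talk-Listen paradigm described above, SIS is typically quantified as the difference in auditory N1 amplitude between hearing one's own overt speech (Talk) and hearing it played back (Listen). A minimal sketch with simulated grand-average ERPs (the N1 window, waveform shapes, and amplitudes are assumptions):

```python
import numpy as np

fs = 1000
t = np.arange(-0.1, 0.4, 1 / fs)            # epoch around vocalization onset (s)
n1_win = (t >= 0.08) & (t <= 0.12)          # assumed N1 latency window

def n1_amplitude(erp):
    """Mean amplitude in the N1 window (more negative = larger N1)."""
    return erp[n1_win].mean()

# Simulated grand averages: the self-produced (Talk) N1 is smaller.
n1_shape = -np.exp(-((t - 0.1) ** 2) / (2 * 0.015 ** 2))
talk = 2.0 * n1_shape                       # microvolts, illustrative
listen = 5.0 * n1_shape

sis = n1_amplitude(talk) - n1_amplitude(listen)  # positive value = suppression
print(f"SIS = {sis:.2f} uV")
```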
Affiliation(s)
- Thomas J Whitford: School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia
13
Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks. Neuropsychologia 2019; 124:322-336. PMID: 30444980. DOI: 10.1016/j.neuropsychologia.2018.11.006.
Abstract
A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses, and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ between the discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in widespread AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
14
van Rootselaar NA, Flindall JW, Gonzalez CLR. Hear speech, change your reach: changes in the left-hand grasp-to-eat action during speech processing. Exp Brain Res 2018; 236:3267-3277. PMID: 30229305. DOI: 10.1007/s00221-018-5376-2.
Abstract
Research has shown that the kinematic characteristics of right-hand movements change when executed during both speech production and speech processing. Despite the variety of prehension and manual actions used to examine this relationship, the literature has yet to examine potential movement effects using an action with a distinct kinematic signature: the hand-to-mouth (grasp-to-eat) action. In this study, participants performed grasp-to-eat and grasp-to-place actions (a) in a quiet environment and (b) while processing speech. Results in the quiet condition replicated previous findings: consistently smaller maximum grip apertures for grasp-to-eat (compared to grasp-to-place) movements appeared only when using the right hand. Interestingly, in the listen condition, smaller maximum grip apertures in the grasp-to-eat movement appeared in both the right and left hands, despite the fact that participants were right-handed. This paper addresses these results in relation to similar behaviour observed in children and discusses implications for functional lateralization and neural organization.
Affiliation(s)
- Nicole A van Rootselaar: The Brain in Action Laboratory, Department of Kinesiology and Physical Education, University of Lethbridge, Lethbridge, AB, T1K 3M4, Canada
- Jason W Flindall: The Brain in Action Laboratory, Department of Kinesiology and Physical Education, University of Lethbridge, Lethbridge, AB, T1K 3M4, Canada
- Claudia L R Gonzalez: The Brain in Action Laboratory, Department of Kinesiology and Physical Education, University of Lethbridge, Lethbridge, AB, T1K 3M4, Canada
15
Agnew ZK, McGettigan C, Banks B, Scott SK. Group and individual variability in speech production networks during delayed auditory feedback. J Acoust Soc Am 2018; 143:3009. PMID: 29857719. PMCID: PMC5963950. DOI: 10.1121/1.5026500.
Abstract
Altering reafferent sensory information can have a profound effect on motor output. Introducing a short delay [delayed auditory feedback (DAF)] during speech production results in modulations of voice and loudness, and produces a range of speech dysfluencies. The ability of speakers to resist the effects of delayed feedback is variable, yet it is unclear what neural processes underlie differences in susceptibility to DAF. Here, susceptibility to DAF was investigated by examining the neural basis of within- and between-subject changes in speech fluency under 50 and 200 ms delay conditions. Using functional magnetic resonance imaging, networks involved in producing speech under two levels of DAF were identified, lying largely within networks active during normal speech production. Independent of condition, fluency ratings were associated with midbrain activity corresponding to periaqueductal grey matter. Across-subject variability in the ability to produce normal-sounding speech under a 200 ms delay was associated with activity in ventral sensorimotor cortices, whereas the ability to produce normal-sounding speech under a 50 ms delay was associated with left inferior frontal gyrus activity. These data indicate that, whilst overlapping cortical mechanisms are engaged for speaking under different delay conditions, susceptibility to different temporal delays in speech feedback may involve different processes.
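The feedback manipulation itself is computationally simple: the speaker's microphone signal is returned to the headphones shifted by a fixed number of samples. A sketch of that delay line (sampling rate and placeholder signal are assumptions):

```python
import numpy as np

def delayed_feedback(mic, fs, delay_ms):
    """Return the signal the speaker hears: the input delayed by delay_ms."""
    d = int(round(fs * delay_ms / 1000.0))  # delay in samples
    out = np.zeros_like(mic)
    if d > 0:
        out[d:] = mic[:-d]
    else:
        out[:] = mic
    return out

fs = 16000
speech = np.random.default_rng(2).normal(size=fs)  # 1 s placeholder "speech"
heard_50 = delayed_feedback(speech, fs, 50)        # 800-sample delay
heard_200 = delayed_feedback(speech, fs, 200)      # 3200-sample delay
```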
Affiliation(s)
- Z K Agnew: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- C McGettigan: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- B Banks: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- S K Scott: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
16
Carey D, Krishnan S, Callaghan MF, Sereno MI, Dick F. Functional and Quantitative MRI Mapping of Somatomotor Representations of Human Supralaryngeal Vocal Tract. Cereb Cortex 2018; 27:265-278. PMID: 28069761. PMCID: PMC5808730. DOI: 10.1093/cercor/bhw393.
Abstract
Speech articulation requires precise control of and coordination between the effectors of the vocal tract (e.g., lips, tongue, soft palate, and larynx). However, it is unclear how the cortex represents movements of and contact between these effectors during speech, or how these cortical responses relate to inter-regional anatomical borders. Here, we used phase-encoded fMRI to map somatomotor representations of speech articulations. Phonetically trained participants produced speech phones, progressing from front (bilabial) to back (glottal) place of articulation. Maps of cortical myelin proxies (R1 = 1/T1) further allowed us to situate functional maps with respect to anatomical borders of motor and somatosensory regions. Across participants, we found a consistent topological map of place of articulation, spanning the central sulcus and primary motor and somatosensory areas, that moved from lateral to inferior as place of articulation progressed from front to back. Phones produced at velar and glottal places of articulation activated the inferior aspect of the central sulcus, but with considerable across-subject variability. R1 maps for a subset of participants revealed that articulator maps extended posteriorly into secondary somatosensory regions. These results show consistent topological organization of cortical representations of the vocal apparatus in the context of speech behavior.
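Phase-encoded designs of this kind assign each voxel the phase of the stimulation cycle at which it responds most strongly, conventionally read off the Fourier component at the cycle frequency. A minimal per-voxel sketch on a synthetic time series (volume count, cycle count, and noise level are assumptions):

```python
import numpy as np

n_vols, n_cycles = 240, 8     # volumes per run, articulation cycles (assumed)
t = np.arange(n_vols)
rng = np.random.default_rng(3)

# Synthetic voxel preferring one phase of the front-to-back articulation cycle.
true_phase = 1.3
ts = (np.cos(2 * np.pi * n_cycles * t / n_vols - true_phase)
      + 0.5 * rng.normal(size=n_vols))

spec = np.fft.rfft(ts - ts.mean())
phase = -np.angle(spec[n_cycles])         # response phase at the cycle frequency
snr = np.abs(spec[n_cycles]) / np.mean(np.abs(np.delete(spec, [0, n_cycles])))
print(f"estimated phase = {phase:.2f} rad (true {true_phase}), SNR = {snr:.1f}")
```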
Affiliation(s)
- Daniel Carey: Department of Psychology, Royal Holloway, University of London, London, TW20 0EX, UK; The Irish Longitudinal Study on Ageing, Department of Medical Gerontology, Trinity College Dublin, Dublin 2, Ireland; Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London, WC1E 7HX, UK
- Saloni Krishnan: Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London, WC1E 7HX, UK; Department of Experimental Psychology, Tinbergen Building, 9 South Parks Road, Oxford, OX1 3UD, UK
- Martina F Callaghan: Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, UK
- Martin I Sereno: Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London, WC1E 7HX, UK; Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London, WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London, WC1H 0AP, UK; Department of Psychology, College of Sciences, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182-4611, USA
- Frederic Dick: Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London, WC1E 7HX, UK; Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London, WC1H 0AP, UK
17
Cohesion and Joint Speech: Right Hemisphere Contributions to Synchronized Vocal Production. J Neurosci 2017; 36:4669-80. PMID: 27122026. DOI: 10.1523/jneurosci.4075-15.2016.
Abstract
Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the couple spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and also whether the voice the participant heard was "live" (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and "live," we observed an absence of the auditory cortex suppression that is commonly seen as a neural correlate of speech production; instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other-produced rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool.
SIGNIFICANCE STATEMENT: Synchronized human behaviors, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI brain imaging to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech seen in chants, prayers, and pledges). Results showed that joint speech recruits additional right-hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing.
18
Eliades SJ, Wang X. Contributions of sensory tuning to auditory-vocal interactions in marmoset auditory cortex. Hear Res 2017; 348:98-111. PMID: 28284736. DOI: 10.1016/j.heares.2017.03.001.
Abstract
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex.
Affiliation(s)
- Steven J Eliades: Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
19
Kell CA, Darquea M, Behrens M, Cordani L, Keller C, Fuchs S. Phonetic detail and lateralization of reading-related inner speech and of auditory and somatosensory feedback processing during overt reading. Hum Brain Mapp 2016; 38:493-508. PMID: 27622923. DOI: 10.1002/hbm.23398.
Abstract
We investigated the phonetic detail and lateralization of inner speech during covert sentence reading, as well as of auditory and somatosensory feedback processing during overt reading, in 32 right-handed healthy participants undergoing 3T fMRI. The number of voiceless and voiced consonants in the processed sentences was systematically varied. Participants listened to sentences, read them covertly, silently mouthed them while reading, and read them overtly. Condition comparisons allowed for the study of effects of externally versus self-generated auditory input and of somatosensory feedback related to or independent of voicing. In every condition, increased voicing modulated bilateral voice-selective regions in the superior temporal sulcus without any lateralization. The enhanced temporal modulation and/or higher spectral frequencies of sentences rich in voiceless consonants induced left-lateralized activation of phonological regions in the posterior temporal lobe, regardless of condition. These results provide evidence that inner speech during reading codes detail as fine as consonant voicing. Our findings suggest that the fronto-temporal internal loops underlying inner speech target different temporal regions, which differ in their sensitivity to inner or overt acoustic speech features. More slowly varying acoustic parameters are represented more anteriorly and bilaterally in the temporal lobe, while quickly changing acoustic features are processed in more posterior left temporal cortices. Furthermore, processing of external auditory feedback during overt sentence reading was sensitive to consonant voicing only in the left superior temporal cortex. Voicing did not modulate left-lateralized processing of somatosensory feedback during articulation or bilateral motor processing, suggesting that voicing is primarily monitored in the auditory rather than the somatosensory feedback channel.
Affiliation(s)
- Christian A Kell: Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
- Maritza Darquea: Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
- Marion Behrens: Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
- Lorenzo Cordani: Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
- Christian Keller: Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
- Susanne Fuchs: Center for General Linguistics, Schuetzenstrasse 18, Berlin, 10117, Germany
20
Lima CF, Krishnan S, Scott SK. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery. Trends Neurosci 2016; 39:527-542. PMID: 27381836. PMCID: PMC5441995. DOI: 10.1016/j.tins.2016.06.003.
Abstract
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to that of lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception.
Highlights:
- Hearing and imagining sounds, including speech, vocalizations, and music, can recruit SMA and pre-SMA, regions normally discussed in relation to their motor functions.
- Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery tasks.
- Responses during auditory processing primarily peak in pre-SMA and in the boundary area between pre-SMA and SMA. This boundary area is crucially involved in the control of speech and vocal production, suggesting that sounds engage this region in an effector-specific manner.
- Activating sound-related motor representations in SMA and pre-SMA might facilitate behavioral responses to sounds, and might also support a flexible generation of sensory predictions based on previous experience to enable imagery and guide perception.
Affiliation(s)
- César F Lima: Institute of Cognitive Neuroscience, University College London, London, UK
- Saloni Krishnan: Department of Experimental Psychology, University of Oxford, Oxford, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
21
Meekings S, Evans S, Lavan N, Boebinger D, Krieger-Redwood K, Cooke M, Scott SK. Distinct neural systems recruited when speech production is modulated by different masking sounds. J Acoust Soc Am 2016; 140:8. PMID: 27475128. DOI: 10.1121/1.4948587.
Abstract
When talkers speak in masking sounds, their speech undergoes a variety of acoustic and phonetic changes, known collectively as the Lombard effect. Most behavioural and neuroimaging research in this area has concentrated on the effect of energetic maskers, such as white noise, on Lombard speech. Previous fMRI studies have argued that neural responses to speaking in noise are driven by the quality of auditory feedback, that is, the audibility of the speaker's voice over the masker. However, we also frequently produce speech in the presence of informational maskers, such as another talker. Here, speakers read sentences over a range of maskers varying in their informational and energetic content: speech, rotated speech, speech-modulated noise, and white noise. Subjects also spoke in quiet and listened to the maskers without speaking. When subjects spoke in masking sounds, their vocal intensity increased in line with the energetic content of the masker. However, the opposite pattern was found neurally: in the superior temporal gyrus, activation was most strongly associated with increases in informational, rather than energetic, masking. This suggests that the neural activations associated with speaking in noise are more complex than a simple feedback response.
Collapse
Affiliation(s)
- Sophie Meekings, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Samuel Evans, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Nadine Lavan, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Dana Boebinger, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Katya Krieger-Redwood, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Martin Cooke, University of the Basque Country (Universidad del País Vasco/EHU), Facultad de Letras, Paseo de la Universidad 5, Vitoria, Alava 01006, Spain
- Sophie K Scott, Psychology and Language Sciences, University College London, Gower Street, London WC1E 6BT, United Kingdom
22
Hurlburt RT, Alderson-Day B, Kühn S, Fernyhough C. Exploring the Ecological Validity of Thinking on Demand: Neural Correlates of Elicited vs. Spontaneously Occurring Inner Speech. PLoS One 2016; 11:e0147932. [PMID: 26845028 PMCID: PMC4741522 DOI: 10.1371/journal.pone.0147932] [Citation(s) in RCA: 49] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2015] [Accepted: 01/11/2016] [Indexed: 12/03/2022] Open
Abstract
Psychology and cognitive neuroscience often use standardized tasks to elicit particular experiences. We explore whether such elicited experiences are similar to spontaneous experiences. In an MRI scanner, five participants performed tasks designed to elicit inner speech (covertly repeating experimenter-supplied words), inner seeing, inner hearing, feeling, and sensing. Then, in their natural environments, participants were trained over four days of random-beep-triggered Descriptive Experience Sampling (DES). They subsequently returned to the scanner for nine 25-min resting-state sessions; during each session they received four DES beeps and described those moments of spontaneously occurring experience (9 × 4 = 36 moments per participant). Enough of those moments included spontaneous inner speech to allow us to compare brain activation during spontaneous inner speech with what we had found for task-elicited inner speech. ROI analysis was used to compare activation in two relevant areas (Heschl's gyrus and left inferior frontal gyrus). Task-elicited inner speech was associated with decreased activation in Heschl's gyrus and increased activation in left inferior frontal gyrus. Spontaneous inner speech, however, showed the opposite pattern in Heschl's gyrus and no significant effect in left inferior frontal gyrus. This study demonstrates how spontaneous phenomena can be investigated with MRI, and it calls into question the assumption that task-elicited phenomena are neurophysiologically and psychologically similar to their spontaneously occurring counterparts.
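The ROI logic of this comparison is straightforward: average the signal within an anatomically defined region per condition and participant, then test conditions against each other. A toy sketch with hypothetical numbers (not the study's data or pipeline):

```python
# Toy sketch of the ROI comparison: per-participant mean activation in one
# region under two conditions, tested with a paired t-test.
# Hypothetical values; not the study's data or analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical mean ROI activations (percent signal change) for 5 participants
heschl_elicited    = rng.normal(-0.3, 0.2, size=5)  # decrease during the task
heschl_spontaneous = rng.normal(+0.2, 0.2, size=5)  # increase when spontaneous

t, p = stats.ttest_rel(heschl_elicited, heschl_spontaneous)
print(f"Heschl's gyrus, elicited vs spontaneous: t={t:.2f}, p={p:.3f}")
```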
Affiliation(s)
- Russell T. Hurlburt, Psychology, University of Nevada Las Vegas, Las Vegas, Nevada, United States of America
- Simone Kühn, Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
23
Meekings S, Boebinger D, Evans S, Lima CF, Chen S, Ostarek M, Scott SK. Do We Know What We're Saying? The Roles of Attention and Sensory Information During Speech Production. Psychol Sci 2015; 26:1975-7. [PMID: 26464309 PMCID: PMC4871256 DOI: 10.1177/0956797614563766] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2014] [Accepted: 11/20/2014] [Indexed: 11/16/2022] Open
Affiliation(s)
- Sophie Meekings, Institute of Cognitive Neuroscience, University College London
- Dana Boebinger, Institute of Cognitive Neuroscience, University College London
- Samuel Evans, Institute of Cognitive Neuroscience, University College London
- César F Lima, Institute of Cognitive Neuroscience, University College London
- Sinead Chen, Institute of Cognitive Neuroscience, University College London
- Markus Ostarek, Institute of Cognitive Neuroscience, University College London
- Sophie K Scott, Institute of Cognitive Neuroscience, University College London
24
Wikman PA, Vainio L, Rinne T. The effect of precision and power grips on activations in human auditory cortex. Front Neurosci 2015; 9:378. [PMID: 26528121 PMCID: PMC4606019 DOI: 10.3389/fnins.2015.00378] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Accepted: 09/28/2015] [Indexed: 11/23/2022] Open
Abstract
The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Auditory-motor interaction is clearly important in speech and music production, but the significance of these cortical pathways for other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks (motor regions were not imaged). During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets with a precision grip, with a power grip, or to give no overt response. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision- and power-grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, these motor effects were distinct from the strong attention-related modulations also present in AC. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.
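As a rough illustration of how a block-design contrast between grip types could be set up, here is a hedged sketch using nilearn. The file name, event table, and parameters are hypothetical, and this is a generic first-level GLM sketch rather than the authors' pipeline:

```python
# Sketch of a block-design GLM contrast between grip types.
# Hypothetical file name and event timings; not the authors' analysis code.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [0, 30, 60, 90],   # block onsets in seconds (hypothetical)
    "duration":   [30, 30, 30, 30],
    "trial_type": ["precision", "power", "no_response", "precision"],
})

model = FirstLevelModel(t_r=2.0, smoothing_fwhm=6.0)
model = model.fit("sub-01_task-auditory_bold.nii.gz", events=events)
# Positive values in the z-map = stronger activation for precision grip
zmap = model.compute_contrast("precision - power", output_type="z_score")
```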
Affiliation(s)
- Patrik A Wikman, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Lari Vainio, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
25
Jenson D, Harkrider AW, Thornton D, Bowers AL, Saltuklaroglu T. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm. Front Hum Neurosci 2015; 9:534. [PMID: 26500519 PMCID: PMC4597480 DOI: 10.3389/fnhum.2015.00534] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 09/14/2015] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants, localized to the left posterior superior temporal gyrus (pSTG) and right posterior middle temporal gyrus (pMTG). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with the PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in the discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real time with the ICA/ERSP technique.
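For orientation, the ICA-plus-ERSP approach can be sketched in a few lines with MNE-Python: decompose the EEG into independent components, epoch around the event of interest, and compute time-frequency power with a log-ratio baseline so that positive values index ERS and negative values ERD. The file name, event codes, and parameters below are hypothetical, and the sketch computes channel-level power for brevity, whereas the study ran ERSP on ICA component clusters:

```python
# Hedged MNE-Python sketch of an ICA -> time-frequency (ERSP-style) workflow.
# Hypothetical file name and parameters; not the study's pipeline.
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.time_frequency import tfr_morlet

raw = mne.io.read_raw_fif("speech_task_raw.fif", preload=True)
raw.filter(1.0, 40.0)  # broadband filter before decomposition

ica = ICA(n_components=20, method="infomax", random_state=42)
ica.fit(raw)  # identify components (e.g., posterior "auditory" alpha sources)

# Epoch around syllable onset and compute alpha-band (8-13 Hz) power
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, tmin=-1.0, tmax=2.0,
                    baseline=None, preload=True)
freqs = np.arange(8, 14)
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False)
# Log-ratio baseline: positive = ERS (synchronization), negative = ERD
power.apply_baseline(baseline=(-1.0, -0.5), mode="logratio")
```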
Affiliation(s)
- David Jenson, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Ashley W. Harkrider, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- David Thornton, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Andrew L. Bowers, Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Tim Saltuklaroglu, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
26
Friston KJ, Frith CD. Active inference, communication and hermeneutics. Cortex 2015; 68:129-43. [PMID: 25957007 PMCID: PMC4502445 DOI: 10.1016/j.cortex.2015.03.025] [Citation(s) in RCA: 125] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2014] [Revised: 12/06/2014] [Accepted: 03/27/2015] [Indexed: 11/16/2022]
Abstract
Hermeneutics refers to the interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting, it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting that we will speak). Crucially, these predictions can be used to predict both self and others, during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions, both at fast timescales (through perceptual inference) and at slower timescales (through perceptual learning). If two agents adopt the same model, then, in principle, they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in the generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds, and vice versa.
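The central mechanism, mutual minimisation of prediction error, can be conveyed with a toy simulation: two agents each treat the other's output as data and descend the gradient of their own squared prediction error, which drives their internal states to converge. This is a didactic sketch only, far simpler than the paper's birdsong simulations with full generative models:

```python
# Toy illustration: two agents update internal states to minimize the error
# in predicting each other's output; both states converge ("singing from the
# same hymn sheet"). Didactic sketch, not the paper's simulations.
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_b = rng.normal(size=2)  # each agent's internal state
lr = 0.1                          # learning rate for perceptual learning

for step in range(100):
    signal_a = mu_a + 0.05 * rng.normal()  # agent A "speaks" (noisy output)
    signal_b = mu_b + 0.05 * rng.normal()  # agent B "speaks"
    err_a = signal_b - mu_a  # A's prediction error while "listening" to B
    err_b = signal_a - mu_b  # B's prediction error while "listening" to A
    # Gradient descent on each agent's squared prediction error
    mu_a += lr * err_a
    mu_b += lr * err_b

print(f"final states: mu_a={mu_a:.3f}, mu_b={mu_b:.3f}")  # nearly equal
```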
Affiliation(s)
- Karl J Friston, The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
- Christopher D Frith, The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
27
Krishnan S, Leech R, Mercure E, Lloyd-Fox S, Dick F. Convergent and Divergent fMRI Responses in Children and Adults to Increasing Language Production Demands. Cereb Cortex 2014; 25:3261-77. [PMID: 24907249 PMCID: PMC4585486 DOI: 10.1093/cercor/bhu120] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
In adults, patterns of neural activation associated with perhaps the most basic language skill, overt object naming, are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 years and 19 young adults as they performed a well-normed picture-naming task with three levels of difficulty. While the neural organization for naming was largely similar in childhood and adulthood, adults showed greater activation in all naming conditions over the inferior temporal gyri and the superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistic skills. We discuss how these neural differences might arise from different cognitive strategies used by adults and children during lexical retrieval/production, as well as from developmental changes in brain structure and functional connectivity.
Affiliation(s)
- Saloni Krishnan, Birkbeck-UCL Centre for NeuroImaging, London, UK; Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Robert Leech, Department of Neurosciences and Mental Health, Imperial College London, London, UK
- Sarah Lloyd-Fox, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Frederic Dick, Birkbeck-UCL Centre for NeuroImaging, London, UK; Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
28
Cogan GB, Thesen T, Carlson C, Doyle W, Devinsky O, Pesaran B. Sensory-motor transformations for speech occur bilaterally. Nature 2014; 507:94-8. [PMID: 24429520 PMCID: PMC4000028 DOI: 10.1038/nature12935] [Citation(s) in RCA: 166] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2013] [Accepted: 12/02/2013] [Indexed: 11/17/2022]
Abstract
Historically, the study of speech processing has emphasized a strong link between auditory perceptual input and motor production output [1-4]. A kind of 'parity' is essential, as both perception- and production-based representations must form a unified interface to facilitate access to higher-order language processes such as syntax and semantics, believed to be computed in the dominant, typically left, hemisphere [5,6]. While various theories have been proposed to unite perception and production [2,7], the underlying neural mechanisms are unclear. Early models of speech and language processing proposed that perceptual processing occurred in the left posterior superior temporal gyrus (Wernicke's area) and motor production processes occurred in the left inferior frontal gyrus (Broca's area) [8,9]. Sensory activity was proposed to link to production activity via connecting fiber tracts, forming the left-lateralized speech sensory-motor system [10]. While recent evidence indicates that speech perception occurs bilaterally [11-13], prevailing models maintain that the speech sensory-motor system is left-lateralized [11,14-18] and facilitates the transformation from sensory-based auditory representations to motor-based production representations [11,15,16]. Evidence for the lateralized computation of sensory-motor speech transformations is, however, indirect and comes primarily from lesion patients with speech repetition deficits (conduction aphasia) and from studies using covert speech and hemodynamic functional imaging [16,19]. Whether the speech sensory-motor system is lateralized, like higher-order language processes, or bilateral, like speech perception, remains controversial. Here, using direct neural recordings in subjects performing sensory-motor tasks involving overt speech production, we show that sensory-motor transformations occur bilaterally. We demonstrate that electrodes over bilateral inferior frontal, inferior parietal, superior temporal, premotor, and somatosensory cortices exhibit robust sensory-motor neural responses during both perception and production in an overt word-repetition task. Using a non-word transformation task, we show that bilateral sensory-motor responses can perform transformations between speech perception- and production-based representations. These results establish a bilateral sublexical speech sensory-motor system.
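The operational definition of a "sensory-motor" electrode in studies of this kind is a site with reliable responses during both listening and speaking. A toy sketch of that selection logic on hypothetical data (not the study's analysis code):

```python
# Toy sketch: flag electrodes with significant responses in BOTH perception
# (listening) and production (speaking) as candidate sensory-motor sites.
# Hypothetical response matrices; not the study's data or code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_electrodes, n_trials = 64, 40
# Hypothetical per-trial response power, relative to a zero baseline
listen = rng.normal(0.0, 1.0, (n_electrodes, n_trials))
speak  = rng.normal(0.0, 1.0, (n_electrodes, n_trials))
listen[:20] += 1.0   # electrodes 0-19 respond during listening
speak[10:30] += 1.0  # electrodes 10-29 respond during speaking

_, p_listen = stats.ttest_1samp(listen, 0.0, axis=1)
_, p_speak  = stats.ttest_1samp(speak, 0.0, axis=1)
sensorimotor = (p_listen < 0.05) & (p_speak < 0.05)  # active in both tasks
print("candidate sensory-motor electrodes:", np.flatnonzero(sensorimotor))
```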
Affiliation(s)
- Gregory B Cogan, Center for Neural Science, New York University, New York, New York 10003, USA
- Thomas Thesen, Department of Neurology, New York University School of Medicine, New York, New York 10016, USA
- Chad Carlson, Department of Neurology, New York University School of Medicine, New York, New York 10016, USA; Medical College of Wisconsin, Milwaukee, Wisconsin 53226, USA
- Werner Doyle, Department of Neurosurgery, New York University School of Medicine, New York, New York 10016, USA
- Orrin Devinsky, Department of Neurology and Department of Neurosurgery, New York University School of Medicine, New York, New York 10016, USA
- Bijan Pesaran, Center for Neural Science, New York University, New York, New York 10003, USA