1. De Ridder D, Adhia D, Vanneste S. The brain's duck test in phantom percepts: Multisensory congruence in neuropathic pain and tinnitus. Brain Res 2024:149137. [PMID: 39103069] [DOI: 10.1016/j.brainres.2024.149137]
Abstract
Chronic neuropathic pain and chronic tinnitus have been likened to phantom percepts, in which complete or partial sensory deafferentation results in a filling in of the missing information derived from memory. One hundred and fifty participants (50 with tinnitus, 50 with chronic pain, and 50 healthy controls) underwent resting-state EEG. Source-localized current density was recorded from all the sensory cortices (olfactory, gustatory, somatosensory, auditory, vestibular, visual) as well as the parahippocampal area. Functional connectivity, by means of lagged phase synchronization, was also computed between these regions of interest. Pain and tinnitus were associated with gamma-band activity, reflecting prediction errors, in all sensory cortices except the olfactory and gustatory cortices. Functional connectivity analysis identified theta-frequency connectivity between each of the sensory cortices (except the chemical senses) and the parahippocampus, but not between the individual sensory cortices. When one sensory domain is deprived, the other senses may provide the parahippocampal 'contextual' area with the most likely sound or somatosensory sensation to fill in the gap, applying an abductive 'duck test' approach, i.e., one based on stored multisensory congruence. This novel concept paves the way for novel treatments for pain and tinnitus using multisensory (i.e., visual, vestibular, somatosensory, auditory) modulation with or without associated parahippocampal targeting.
Affiliation(s)
- Dirk De Ridder
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand
- Divya Adhia
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand
- Sven Vanneste
- School of Psychology, Trinity College Dublin, Dublin, Ireland; Global Brain Health Institute & Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland. https://www.lab-clint.org
2. Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. J Speech Lang Hear Res 2024;67:1424-1460. [PMID: 38593006] [DOI: 10.1044/2024_jslhr-23-00575]
Abstract
PURPOSE The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
3. Ashokumar M, Schwartz JL, Ito T. Changes in Speech Production Following Perceptual Training With Orofacial Somatosensory Inputs. J Speech Lang Hear Res 2024:1-12. [PMID: 38497731] [DOI: 10.1044/2023_jslhr-23-00249]
Abstract
PURPOSE Orofacial somatosensory inputs play an important role in speech motor control and speech learning. Since receiving specific auditory-somatosensory inputs during speech perceptual training alters speech perception, similar perceptual training could also alter speech production. We examined whether production performance was changed by perceptual training with orofacial somatosensory inputs. METHOD We focused on the French vowels /e/ and /ø/, contrasted in their articulation by horizontal gestures. Perceptual training consisted of a vowel identification task contrasting /e/ and /ø/. During training, for the first group of participants, somatosensory stimulation was applied as facial skin stretch in the backward direction. We recorded the target vowels uttered by the participants before and after the perceptual training and compared their F1, F2, and F3 formants. We also tested a control group with no somatosensory stimulation and another somatosensory group trained with a different vowel continuum (/e/-/i/). RESULTS Perceptual training with somatosensory stimulation induced changes in F2 and F3 of the produced vowel sounds. F2 decreased consistently in the two somatosensory groups. F3 increased following the /e/-/ø/ training and decreased following the /e/-/i/ training. The F2 change was significantly correlated with the perceptual shift between the first and second halves of the training phase in the somatosensory group with the /e/-/ø/ training, but not with the /e/-/i/ training. The control group showed no change in F2 or F3, only a trend toward an F1 increase. CONCLUSION The results suggest that somatosensory inputs associated with speech sound inputs can play a role in speech training and learning, in both production and perception.
Affiliation(s)
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, France
4. De Ridder D, Friston K, Sedley W, Vanneste S. A parahippocampal-sensory Bayesian vicious circle generates pain or tinnitus: a source-localized EEG study. Brain Commun 2023;5:fcad132. [PMID: 37223127] [PMCID: PMC10202557] [DOI: 10.1093/braincomms/fcad132]
Abstract
Pain and tinnitus share common pathophysiological mechanisms, clinical features, and treatment approaches. A source-localized resting-state EEG study was conducted in 150 participants: 50 healthy controls, 50 pain patients, and 50 tinnitus patients. Resting-state activity as well as functional and effective connectivity was computed in source space. Pain and tinnitus were characterized by increased theta activity in the pregenual anterior cingulate cortex, extending to the lateral prefrontal cortex and medial anterior temporal lobe. Gamma-band activity was increased in both auditory and somatosensory cortex, irrespective of the pathology, and extended to the dorsal anterior cingulate cortex and parahippocampus. Functional and effective connectivity were largely similar in pain and tinnitus, except for a parahippocampal-sensory loop that distinguished pain from tinnitus. In tinnitus, the effective connectivity between parahippocampus and auditory cortex is bidirectional, whereas the effective connectivity between parahippocampus and somatosensory cortex is unidirectional. In pain, the parahippocampal-somatosensory connectivity is bidirectional, but the parahippocampal-auditory connectivity is unidirectional. These modality-specific loops exhibited theta-gamma nesting. Within a Bayesian model of brain functioning, these findings suggest that the phenomenological difference between auditory and somatosensory phantom percepts results from a vicious circle of belief updating in the context of missing sensory information. This finding may further our understanding of multisensory integration and speaks to a universal treatment for pain and tinnitus: selectively disrupting parahippocampal-somatosensory and parahippocampal-auditory theta-gamma activity and connectivity.
Affiliation(s)
- Dirk De Ridder
- Unit of Neurosurgery, Department of Surgical Sciences, Dunedin School of Medicine, University of Otago, Dunedin 9016, New Zealand
- Karl Friston
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3AR, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Sven Vanneste
- Correspondence to: Sven Vanneste, Lab for Clinical & Integrative Neuroscience, Global Brain Health Institute and Institute of Neuroscience, Trinity College Dublin, College Green 2, Dublin D02 PN40, Ireland
5. Floegel M, Kasper J, Perrier P, Kell CA. How the conception of control influences our understanding of actions. Nat Rev Neurosci 2023;24:313-329. [PMID: 36997716] [DOI: 10.1038/s41583-023-00691-z]
Abstract
Wilful movement requires neural control. Commonly, neural computations are thought to generate motor commands that bring the musculoskeletal system - that is, the plant - from its current physical state into a desired physical state. The current state can be estimated from past motor commands and from sensory information. Modelling movement on the basis of this concept of plant control strives to explain behaviour by identifying the computational principles for control signals that can reproduce the observed features of movements. From an alternative perspective, movements emerge in a dynamically coupled agent-environment system from the pursuit of subjective perceptual goals. Modelling movement on the basis of this concept of perceptual control aims to identify the controlled percepts and their coupling rules that can give rise to the observed characteristics of behaviour. In this Perspective, we discuss a broad spectrum of approaches to modelling human motor control and their notions of control signals, internal models, handling of sensory feedback delays and learning. We focus on the influence that the plant control and the perceptual control perspective may have on decisions when modelling empirical data, which may in turn shape our understanding of actions.
Affiliation(s)
- Mareike Floegel
- Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany
- Johannes Kasper
- Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Christian A Kell
- Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany.
6. Franken MK, Liu BC, Ostry DJ. Towards a somatosensory theory of speech perception. J Neurophysiol 2022;128:1683-1695. [PMID: 36416451] [PMCID: PMC9762980] [DOI: 10.1152/jn.00381.2022]
Abstract
Speech perception is known to be a multimodal process, relying not only on auditory input but also on the visual system and possibly on the motor system as well. To date there has been little work on the potential involvement of the somatosensory system in speech perception. In the present review, we identify the somatosensory system as another contributor to speech perception. First, we argue that evidence in favor of a motor contribution to speech perception can just as easily be interpreted as showing somatosensory involvement. Second, physiological and neuroanatomical evidence for auditory-somatosensory interactions across the auditory hierarchy indicates the availability of a neural infrastructure that supports somatosensory involvement in auditory processing in general. Third, there is accumulating evidence for somatosensory involvement in the context of speech specifically. In particular, tactile stimulation modifies speech perception, and auditory speech input elicits activity in somatosensory cortical areas. Moreover, speech sounds can be decoded from activity in somatosensory cortex; lesions to this region affect perception, and vowels can be identified on the basis of somatic input alone. We suggest that somatosensory involvement in speech perception derives from the somatosensory-auditory pairing that occurs during speech production and learning. By bringing together findings from a set of studies that have not previously been linked, the present article identifies the somatosensory system as a previously unrecognized contributor to speech perception.
Affiliation(s)
- David J Ostry
- McGill University, Montreal, Quebec, Canada
- Haskins Laboratories, New Haven, Connecticut
7. Ashokumar M, Guichet C, Schwartz JL, Ito T. Correlation between the effect of orofacial somatosensory inputs in speech perception and speech production performance. Audit Percept Cogn 2022;6:97-107. [PMID: 37260602] [PMCID: PMC10229140] [DOI: 10.1080/25742442.2022.2134674]
Abstract
Introduction Orofacial somatosensory inputs modify the perception of speech sounds. Such auditory-somatosensory integration likely develops alongside the acquisition of speech production. We examined whether the somatosensory effect in speech perception varies depending on individual characteristics of speech production. Methods The somatosensory effect in speech perception was assessed as the change in the category boundary between /e/ and /ø/ in a vowel identification test when somatosensory stimulation (facial skin deformation in the rearward direction, corresponding to the articulatory movement for /e/) was applied together with the auditory input. Speech production performance was quantified by the acoustic distances between the average first, second, and third formants of /e/ and /ø/ utterances recorded in a separate test. Results The category boundary between /e/ and /ø/ was significantly shifted towards /ø/ by the somatosensory stimulation, consistent with previous research. The amplitude of the category boundary shift was significantly correlated with the acoustic distance between the mean second (and, marginally, third) formants of /e/ and /ø/ productions, with no correlation with the first formant distance. Discussion Greater acoustic distances can be related to larger contrasts between the articulatory targets of vowels in speech production. These results suggest that the somatosensory effect in speech perception can be linked to speech production performance.
Affiliation(s)
- Monica Ashokumar
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Clément Guichet
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Jean-Luc Schwartz
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, USA
8. Ito T, Ogane R. Repetitive Exposure to Orofacial Somatosensory Inputs in Speech Perceptual Training Modulates Vowel Categorization in Speech Perception. Front Psychol 2022;13:839087. [PMID: 35558689] [PMCID: PMC9088678] [DOI: 10.3389/fpsyg.2022.839087]
Abstract
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ was changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli in order to expose participants to stimulus variants that covered the range between /ε/ and /a/ evenly. In this phase, to mimic the sensory input that accompanies speech production and learning, somatosensory stimulation was applied in the upward direction in an experimental group when the stimulus sound was presented. A control group (CTL) followed the same training procedure in the absence of somatosensory stimulation. When we compared category boundaries prior to and following paired auditory-somatosensory training, the boundary for participants in the experimental group reliably shifted in the direction of /ε/, indicating that the participants perceived /a/ more often than /ε/ as a consequence of training. In contrast, the CTL group did not show any change. Although a limited number of participants were tested, the perceptual shift was reduced and almost eliminated 1 week later. Our data suggest that repetitive exposure to somatosensory inputs, in a task that simulates the sensory pairing occurring during speech production, changes the perceptual system, supporting the idea that somatosensory inputs play a role in speech perceptual adaptation, probably contributing to the formation of sound representations for speech perception.
Affiliation(s)
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States
- Rintaro Ogane
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States
9. Endo N, Ito T, Watanabe K, Nakazawa K. Enhancement of loudness discrimination acuity for self-generated sound is independent of musical experience. PLoS One 2021;16:e0260859. [PMID: 34874970] [PMCID: PMC8651135] [DOI: 10.1371/journal.pone.0260859]
Abstract
Musicians tend to have better auditory and motor performance than non-musicians because of their extensive musical experience. In a previous study, we established that loudness discrimination acuity is enhanced when sound is produced by a precise force generation task. In this study, we compared this enhancement effect between experienced pianists and non-musicians. Without the force generation task, loudness discrimination acuity was better in pianists than in non-musicians. However, the force generation task enhanced loudness discrimination acuity similarly in both pianists and non-musicians. Reaction time was also reduced with the force generation task, but only in the non-musician group. The results suggest that the enhancement of loudness discrimination acuity with a precise force generation task is independent of musical experience and is, therefore, a fundamental function of auditory-motor interaction.
Affiliation(s)
- Nozomi Endo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Takayuki Ito
- CNRS, Grenoble INP, GIPSA-Lab, Univ. Grenoble Alpes, Grenoble, France
- Haskins Laboratories, New Haven, Connecticut, United States of America
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, Australia
- Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan