1
Bailey KM, Sami S, Smith FW. Decoding familiar visual object categories in the mu rhythm oscillatory response. Neuropsychologia 2024; 199:108900. PMID: 38697558. DOI: 10.1016/j.neuropsychologia.2024.108900.
Abstract
Whilst previous research has linked attenuation of the mu rhythm to the observation of specific visual categories, and even to a potential role in action observation via a putative mirror neuron system, much of this work has not considered what specific type of information might be coded in this oscillatory response when triggered via vision. Here, we sought to determine whether the mu rhythm contains content-specific information about the identity of familiar (and also unfamiliar) graspable objects. In the present study, right-handed participants (N = 27) viewed images of both familiar (apple, wine glass) and unfamiliar (cubie, smoothie) graspable objects, whilst performing an orthogonal task at fixation. Multivariate pattern analysis (MVPA) revealed significant decoding of familiar, but not unfamiliar, visual object categories in the mu rhythm response. Thus, simply viewing familiar graspable objects may automatically trigger activation of associated tactile and/or motor properties in sensorimotor areas, reflected in the mu rhythm. In addition, we report significant attenuation in the central beta band for both familiar and unfamiliar visual objects, but not in the mu rhythm. Our findings highlight how analysing two different aspects of the oscillatory response (either attenuation or the representation of information content) provides complementary views on the role of the mu rhythm in response to viewing graspable object categories.
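As a schematic of the multivariate pattern analysis (MVPA) logic this abstract describes (decoding category identity from patterns of oscillatory power), the following sketch is illustrative only: the classifier, channel count, and effect size are invented, and synthetic data stand in for recorded mu-band responses.

```python
import numpy as np

# Illustrative sketch only (not the study's code): cross-validated MVPA
# decoding of two object categories from trial-wise band-limited
# (e.g. mu, 8-13 Hz) power across EEG channels. All data are synthetic.

rng = np.random.default_rng(0)
n_trials, n_channels = 60, 32            # trials per category, sensors

# Category B carries a small systematic shift on 8 channels: the kind of
# "information content" a decoder can pick up even without an overall
# power difference (attenuation) between categories.
pattern = np.zeros(n_channels)
pattern[:8] = 0.5
cat_a = rng.normal(0.0, 1.0, (n_trials, n_channels))
cat_b = rng.normal(0.0, 1.0, (n_trials, n_channels)) + pattern

X = np.vstack([cat_a, cat_b])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid decoder."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), dtype=bool)
        train[i] = False                 # hold out trial i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += int(pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid_accuracy(X, y)
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

In practice, significance of such an accuracy would be assessed against the 0.50 chance level (e.g. with permutation tests); the nearest-centroid decoder here is a stand-in for whatever classifier a given study employs.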
Affiliation(s)
- Saber Sami
- Norwich Medical School, University of East Anglia, UK
2
Hong Y, Ryun S, Chung CK. Evoking artificial speech perception through invasive brain stimulation for brain-computer interfaces: current challenges and future perspectives. Front Neurosci 2024; 18:1428256. PMID: 38988764. PMCID: PMC11234843. DOI: 10.3389/fnins.2024.1428256.
Abstract
Encoding artificial perceptions through brain stimulation, especially of higher cognitive functions such as speech perception, is one of the most formidable challenges in brain-computer interfaces (BCIs). Brain stimulation has been used for functional mapping in clinical practice for the last 70 years to treat various disorders affecting the nervous system, including epilepsy, Parkinson's disease, essential tremor, and dystonia. Recently, direct electrical stimulation has been used to evoke various forms of perception in humans, ranging from sensorimotor, auditory, and visual percepts to speech cognition. Successfully evoking and fine-tuning artificial perceptions could revolutionize communication for individuals with speech disorders and significantly enhance the capabilities of brain-computer interface technologies. However, despite the extensive literature on encoding various perceptions and the rising popularity of speech BCIs, inducing artificial speech perception is still largely unexplored, and its potential has yet to be determined. In this paper, we examine the various stimulation techniques used to evoke complex percepts and the target brain areas for the input of speech-like information. Finally, we discuss strategies to address the challenges of speech encoding and consider the prospects of these approaches.
Affiliation(s)
- Yirye Hong
- Department of Brain and Cognitive Sciences, College of Natural Sciences, Seoul National University, Seoul, Republic of Korea
- Seokyun Ryun
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Chun Kee Chung
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
3
Undurraga JA, Luke R, Van Yper L, Monaghan JJM, McAlpine D. The neural representation of an auditory spatial cue in the primate cortex. Curr Biol 2024; 34:2162-2174.e5. PMID: 38718798. DOI: 10.1016/j.cub.2024.04.034.
Abstract
Humans make use of small differences in the timing of sounds at the two ears, known as interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Interacoustics Research Unit, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark.
- Robert Luke
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; The Bionics Institute, 384-388 Albert St., East Melbourne, VIC 3002, Australia
- Lindsey Van Yper
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Institute of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark; Research Unit for ORL, Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, 5230 Odense, Denmark
- Jessica J M Monaghan
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; National Acoustic Laboratories, Australian Hearing Hub, 16 University Avenue, Sydney, NSW 2109, Australia
- David McAlpine
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Macquarie University Hearing and the Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia.
4
Noda T, Aschauer DF, Chambers AR, Seiler JPH, Rumpel S. Representational maps in the brain: concepts, approaches, and applications. Front Cell Neurosci 2024; 18:1366200. PMID: 38584779. PMCID: PMC10995314. DOI: 10.3389/fncel.2024.1366200.
Abstract
Neural systems have evolved to process sensory stimuli in a way that allows for efficient and adaptive behavior in a complex environment. Recent technological advances enable us to investigate sensory processing in animal models by simultaneously recording the activity of large populations of neurons with single-cell resolution, yielding high-dimensional datasets. In this review, we discuss concepts and approaches for assessing the population-level representation of sensory stimuli in the form of a representational map. In such a map, not only are the identities of stimuli distinctly represented, but their relational similarity is also mapped onto the space of neuronal activity. We highlight example studies in which the structure of representational maps in the brain is estimated from recordings in humans as well as animals and compare their methodological approaches. Finally, we integrate these aspects and provide an outlook for how the concept of representational maps could be applied to various fields in basic and clinical neuroscience.
Affiliation(s)
- Takahiro Noda
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Dominik F. Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Anna R. Chambers
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
- Johannes P.-H. Seiler
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
5
Foffani G. To be or not to be hallucinating: Implications of hypnagogic/hypnopompic experiences and lucid dreaming for brain disorders. PNAS Nexus 2024; 3:pgad442. PMID: 38178978. PMCID: PMC10766414. DOI: 10.1093/pnasnexus/pgad442.
Abstract
The boundaries between waking and sleeping, when falling asleep (hypnagogic) or waking up (hypnopompic), can be challenging for our ability to monitor and interpret reality. Without proper understanding, bizarre but relatively normal hypnagogic/hypnopompic experiences can be misinterpreted as psychotic hallucinations (occurring, by definition, in the fully awake state), potentially leading to stigma and misdiagnosis in clinical contexts and to misconception and bias in research contexts. This Perspective proposes that conceptual and practical understanding for differentiating hallucinations from hypnagogic/hypnopompic experiences may be offered by lucid dreaming, the state in which one is aware of dreaming while sleeping. I first introduce a possible systematization of the phenomenological range of hypnagogic/hypnopompic experiences that can occur in the transition from wakefulness to REM dreaming (including hypnagogic perceptions, transition symptoms, sleep paralysis, false awakenings, and out-of-body experiences). I then outline how metacognitive strategies used by lucid dreamers to gain/confirm oneiric lucidity could be tested for better differentiating hypnagogic/hypnopompic experiences from hallucinations. The relevance of hypnagogic/hypnopompic experiences and lucid dreaming is analyzed for schizophrenia and narcolepsy, and discussed for neurodegenerative diseases, particularly Lewy-body disorders (i.e. Parkinson's disease, Parkinson's disease dementia, and dementia with Lewy bodies), offering testable hypotheses for empirical investigation. Finally, emotionally positive lucid dreams triggered or enhanced by training/induction strategies or by a pathological process may have intrinsic therapeutic value if properly recognized and guided. The overall intention is to raise awareness and foster further research about the possible diagnostic, prognostic, and therapeutic implications of hypnagogic/hypnopompic experiences and lucid dreaming for brain disorders.
Affiliation(s)
- Guglielmo Foffani
- HM CINAC (Centro Integral de Neurociencias Abarca Campal), Hospital Universitario HM Puerta del Sur, HM Hospitales, Madrid 28938, Spain
- Hospital Nacional de Parapléjicos, Toledo 45004, Spain
- CIBERNED, Instituto de Salud Carlos III, Madrid 28031, Spain
6
Alonso-Valerdi LM, Ibarra-Zárate DI, Torres-Torres AS, Zolezzi DM, Naal-Ruiz NE, Argüello-García J. Comparative analysis of acoustic therapies for tinnitus treatment based on auditory event-related potentials. Front Neurosci 2023; 17:1059096. PMID: 37081936. PMCID: PMC10111057. DOI: 10.3389/fnins.2023.1059096.
Abstract
Introduction: So far, auditory event-related potential (AERP) features have been used to characterize the neural activity of patients with tinnitus. However, these EEG patterns could also be used to evaluate tinnitus evolution. The aim of the present study is to propose a methodology based on AERPs to evaluate the effectiveness of four acoustic therapies for tinnitus treatment.
Methods: The acoustic therapies were: (1) Tinnitus Retraining Therapy (TRT), (2) Auditory Discrimination Therapy (ADT), (3) Therapy for Enriched Acoustic Environment (TEAE), and (4) Binaural Beats Therapy (BBT). In addition, relaxing music was included as a placebo for both tinnitus sufferers and healthy individuals. To meet this aim, 103 participants were recruited (53% female, 47% male). All participants were treated for 8 weeks with one of these five sounds, tuned in accordance with the acoustic features of their tinnitus (if applicable) and hearing loss. They were monitored electroencephalographically before and after the acoustic therapy, and AERPs were estimated from these recordings. The effect of the acoustic therapies was evaluated by examining the area under the curve of the AERPs, from which two parameters were obtained: (1) amplitude and (2) topographical distribution.
Results: After the 8-week treatment, TRT and ADT achieved significant neurophysiological changes over somatosensory and occipital regions, respectively. TRT increased tinnitus perception, whereas ADT redirected tinnitus attention, which in turn diminished tinnitus perception. Tinnitus Handicap Inventory outcomes corroborated these neurophysiological findings: 31% of patients in each group reported that TRT increased tinnitus perception while ADT diminished it.
Discussion: Tinnitus has been identified as a multifactorial condition highly associated with hearing loss, age, sex, marital status, education, and even employment, although no conclusive evidence has been found yet. In this study, a significant (but low) correlation was found between tinnitus intensity and right-ear hearing loss, left-ear hearing loss, heart rate, the area under the curve of the AERPs, and acoustic therapy. This study raises the possibility of assigning acoustic therapies according to the neurophysiological response of the patient.
Affiliation(s)
- Luz M. Alonso-Valerdi
- Tecnológico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
- Correspondence: Luz M. Alonso-Valerdi
- Daniela M. Zolezzi
- Tecnológico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
- Janet Argüello-García
- Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas, Instituto Politécnico Nacional, Mexico City, Mexico
7
Franken MK, Liu BC, Ostry DJ. Towards a somatosensory theory of speech perception. J Neurophysiol 2022; 128:1683-1695. PMID: 36416451. PMCID: PMC9762980. DOI: 10.1152/jn.00381.2022.
Abstract
Speech perception is known to be a multimodal process, relying not only on auditory input but also on the visual system and possibly on the motor system as well. To date there has been little work on the potential involvement of the somatosensory system in speech perception. In the present review, we identify the somatosensory system as another contributor to speech perception. First, we argue that evidence in favor of a motor contribution to speech perception can just as easily be interpreted as showing somatosensory involvement. Second, physiological and neuroanatomical evidence for auditory-somatosensory interactions across the auditory hierarchy indicates the availability of a neural infrastructure that supports somatosensory involvement in auditory processing in general. Third, there is accumulating evidence for somatosensory involvement in the context of speech specifically. In particular, tactile stimulation modifies speech perception, and auditory speech input elicits activity in somatosensory cortical areas. Moreover, speech sounds can be decoded from activity in somatosensory cortex; lesions to this region affect perception, and vowels can be identified based on somatic input alone. We suggest that the somatosensory involvement in speech perception derives from the somatosensory-auditory pairing that occurs during speech production and learning. By bringing together findings from a set of studies that have not been previously linked, the present article identifies the somatosensory system as a presently unrecognized contributor to speech perception.
Affiliation(s)
- David J Ostry
- McGill University, Montreal, Quebec, Canada
- Haskins Laboratories, New Haven, Connecticut
8
Lohse M, Zimmer-Harwood P, Dahmen JC, King AJ. Integration of somatosensory and motor-related information in the auditory system. Front Neurosci 2022; 16:1010211. PMID: 36330342. PMCID: PMC9622781. DOI: 10.3389/fnins.2022.1010211.
Abstract
An ability to integrate information provided by different sensory modalities is a fundamental feature of neurons in many brain areas. Because visual and auditory inputs often originate from the same external object, which may be located some distance away from the observer, the synthesis of these cues can improve localization accuracy and speed up behavioral responses. By contrast, multisensory interactions occurring close to the body typically involve a combination of tactile stimuli with other sensory modalities. Moreover, most activities involving active touch generate sound, indicating that stimuli in these modalities are frequently experienced together. In this review, we examine the basis for determining sound-source distance and the contribution of auditory inputs to the neural encoding of space around the body. We then consider the perceptual consequences of combining auditory and tactile inputs in humans and discuss recent evidence from animal studies demonstrating how cortical and subcortical areas work together to mediate communication between these senses. This research has shown that somatosensory inputs interface with and modulate sound processing at multiple levels of the auditory pathway, from the cochlear nucleus in the brainstem to the cortex. Circuits involving inputs from the primary somatosensory cortex to the auditory midbrain have been identified that mediate suppressive effects of whisker stimulation on auditory thalamocortical processing, providing a possible basis for prioritizing the processing of tactile cues from nearby objects. Close links also exist between audition and movement, and auditory responses are typically suppressed by locomotion and other actions. These movement-related signals are thought to cancel out self-generated sounds, but they may also affect auditory responses via the associated somatosensory stimulation or as a result of changes in brain state. 
Together, these studies highlight the importance of considering both multisensory context and movement-related activity in order to understand how the auditory cortex operates during natural behaviors, paving the way for future work to investigate auditory-somatosensory interactions in more ecological situations.
9
Sathian K, Lacey S. Cross-Modal Interactions of the Tactile System. Curr Dir Psychol Sci 2022; 31:411-418. PMID: 36408466. PMCID: PMC9674209. DOI: 10.1177/09637214221101877.
Abstract
The sensory systems responsible for perceptions of touch, vision, hearing, etc. have traditionally been regarded as mostly separate, only converging at late stages of processing. Contrary to this dogma, recent work has shown that interactions between the senses are robust and abundant. Touch and vision are both commonly used to obtain information about a number of object properties, and share perceptual and neural representations in many domains. Additionally, visuotactile interactions are implicated in the sense of body ownership, as revealed by powerful illusions that can be evoked by manipulating these interactions. Touch and hearing both rely in part on temporal frequency information, leading to a number of audiotactile interactions reflecting a good deal of perceptual and neural overlap. The focus in sensory neuroscience and psychophysics is now on characterizing the multisensory interactions that lead to our panoply of perceptual experiences.
Affiliation(s)
- K. Sathian
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center
- Department of Neural & Behavioral Sciences, Penn State College of Medicine
- Department of Psychology, Penn State College of Liberal Arts
- Simon Lacey
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center
- Department of Neural & Behavioral Sciences, Penn State College of Medicine
10
Sharma D, Ng KKW, Birznieks I, Vickery RM. Auditory clicks elicit equivalent temporal frequency perception to tactile pulses: A cross-modal psychophysical study. Front Neurosci 2022; 16:1006185. PMID: 36161171. PMCID: PMC9500524. DOI: 10.3389/fnins.2022.1006185.
Abstract
Both hearing and touch are sensitive to the frequency of mechanical oscillations—sound waves and tactile vibrations, respectively. The mounting evidence of parallels in temporal frequency processing between the two sensory systems led us to directly address the question of perceptual frequency equivalence between touch and hearing using stimuli of simple and more complex temporal features. In a cross-modal psychophysical paradigm, subjects compared the perceived frequency of pulsatile mechanical vibrations to that elicited by pulsatile acoustic (click) trains, and vice versa. Non-invasive pulsatile stimulation designed to excite a fixed population of afferents was used to induce desired temporal spike trains at frequencies spanning flutter up to vibratory hum (>50 Hz). The cross-modal perceived frequency for regular test pulse trains of either modality was a close match to the presented stimulus physical frequency up to 100 Hz. We then tested whether the recently discovered “burst gap” temporal code for frequency, that is shared by the two senses, renders an equivalent cross-modal frequency perception. When subjects compared trains comprising pairs of pulses (bursts) in one modality against regular trains in the other, the cross-sensory equivalent perceptual frequency best corresponded to the silent interval between the successive bursts in both auditory and tactile test stimuli. These findings suggest that identical acoustic and vibrotactile pulse trains, regardless of pattern, elicit equivalent frequencies, and imply analogous temporal frequency computation strategies in both modalities. This perceptual correspondence raises the possibility of employing a cross-modal comparison as a robust standard to overcome the prevailing methodological limitations in psychophysical investigations and strongly encourages cross-modal approaches for transmitting sensory information such as translating pitch into a similar pattern of vibration on the skin.
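The "burst gap" temporal code referred to in this abstract can be illustrated with a toy calculation (not code from the study; the function and stimulus values are invented): under this rule, the perceived frequency of a pulse-burst train tracks the inverse of the silent interval between successive bursts rather than the mean pulse rate.

```python
# Illustrative sketch of the burst-gap rule described in the abstract:
# perceived frequency corresponds to the silent interval separating the
# end of one burst from the start of the next. Stimulus values invented.

def perceived_frequency_hz(burst_period_s, pulses_per_burst,
                           intra_burst_interval_s):
    """Predicted percept under the burst-gap rule: the inverse of the
    silent gap between successive bursts."""
    burst_duration = (pulses_per_burst - 1) * intra_burst_interval_s
    gap = burst_period_s - burst_duration
    return 1.0 / gap

# Regular train of single pulses every 20 ms: the gap equals the period,
# so the rule predicts a 50 Hz percept, matching the pulse rate.
print(perceived_frequency_hz(0.020, 1, 0.0))

# Paired pulses (bursts of 2, 5 ms apart) every 20 ms: the gap shrinks
# to 15 ms, so the rule predicts a higher percept than the 50 Hz burst
# rate, even though the number of bursts per second is unchanged.
print(perceived_frequency_hz(0.020, 2, 0.005))
```

The second case is the kind of stimulus the study uses to dissociate the burst-gap prediction from a mean-rate prediction across the two modalities.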
Affiliation(s)
- Deepak Sharma
- School of Biomedical Sciences, The University of New South Wales (UNSW Sydney), Sydney, NSW, Australia
- Neuroscience Research Australia, Sydney, NSW, Australia
- Correspondence: Deepak Sharma
- Kevin K. W. Ng
- Center for Social and Affective Neuroscience, Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden
- Ingvars Birznieks
- School of Biomedical Sciences, The University of New South Wales (UNSW Sydney), Sydney, NSW, Australia
- Neuroscience Research Australia, Sydney, NSW, Australia
- Bionics and Bio-Robotics, Tyree Foundation Institute of Health Engineering, The University of New South Wales (UNSW Sydney), Sydney, NSW, Australia
- Richard M. Vickery
- School of Biomedical Sciences, The University of New South Wales (UNSW Sydney), Sydney, NSW, Australia
- Neuroscience Research Australia, Sydney, NSW, Australia
- Bionics and Bio-Robotics, Tyree Foundation Institute of Health Engineering, The University of New South Wales (UNSW Sydney), Sydney, NSW, Australia
11
Bailey KM, Giordano BL, Kaas AL, Smith FW. Decoding sounds depicting hand-object interactions in primary somatosensory cortex. Cereb Cortex 2022; 33:3621-3635. PMID: 36045002. DOI: 10.1093/cercor/bhac296.
Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influences from both within and across modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicit different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
Affiliation(s)
- Kerri M Bailey
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Bruno L Giordano
- Institut des Neurosciences de la Timone, CNRS UMR 7289, Université Aix-Marseille, Marseille, France
- Amanda L Kaas
- Department of Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
12
Liang N, Liu S, Li X, Wen D, Li Q, Tong Y, Xu Y. A Decrease in Hemodynamic Response in the Right Postcentral Cortex Is Associated With Treatment-Resistant Auditory Verbal Hallucinations in Schizophrenia: An NIRS Study. Front Neurosci 2022; 16:865738. PMID: 35692414. PMCID: PMC9177139. DOI: 10.3389/fnins.2022.865738.
Abstract
Background: Treatment-resistant auditory verbal hallucinations (TRAVHs) may increase the risk of violence, suicide, and hospitalization in patients with schizophrenia (SCZ). Although neuroimaging studies have identified neural correlates of the symptom of AVH, evidence on functional brain activity specific to patients with TRAVH remains limited. Functional near-infrared spectroscopy (fNIRS) is a portable measurement well suited to exploring brain activation during related tasks. We therefore aimed to explore differences in cerebral hemodynamic function among patients with SCZ-TRAVH, patients with schizophrenia without AVH (SCZ-nAVH), and healthy controls (HCs), to examine neural abnormalities associated more specifically with TRAVH.
Methods: A 52-channel fNIRS system was used to monitor hemodynamic changes in patients with SCZ-TRAVH (n = 38), patients with SCZ-nAVH (n = 35), and HCs (n = 30) during a verbal fluency task (VFT). VFT performance, clinical history, and symptom severity were also recorded. The original fNIRS data were analyzed using MATLAB to obtain β values (the cortical activity response during the VFT period); these were used to calculate Δβ (VFT β minus baseline β), which represents the degree of change in oxygenated hemoglobin caused by the VFT.
Results: There were significant differences in Δβ values among the three groups at 26 channels (ch4, ch13-15, 18, 22, ch25–29, 32, ch35–39, ch43–51; F = 1.70 to 19.10, p < 0.043, FDR-corrected) distributed over prefrontal-temporal cortical regions. Further pairwise comparisons showed that the Δβ values of 24 channels (ch13–15, 18, 22, 25, ch26–29, ch35–39, ch43–49, ch50–51) were significantly lower in the SCZ groups (SCZ-TRAVH and/or SCZ-nAVH) than in the HC group (p < 0.026, FDR-corrected). Additionally, the abnormal activation at ch22, over the right postcentral gyrus, correlated with the severity of TRAVH.
Conclusion: Our findings indicate that specific regions of the prefrontal cortex may be associated with TRAVH, which may have implications for early intervention in psychosis.
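The Δβ measure (task-period β minus baseline β) and the FDR-corrected channel-wise comparison this abstract describes can be sketched as follows. This is an invented illustration (synthetic data, a normal-approximation p-value, and a hand-rolled Benjamini-Hochberg step), not the study's MATLAB pipeline.

```python
import numpy as np
from math import erf, sqrt

# Illustrative sketch only: per-channel delta-beta (VFT beta minus
# baseline beta) and a Benjamini-Hochberg FDR correction across the
# 52 fNIRS channels. All data below are synthetic.

rng = np.random.default_rng(1)
n_subj, n_chan = 30, 52

effect = np.zeros(n_chan)
effect[:10] = 1.0                                    # 10 "active" channels
beta_task = rng.normal(0.0, 1.0, (n_subj, n_chan)) + effect
beta_base = rng.normal(0.0, 1.0, (n_subj, n_chan))
delta_beta = beta_task - beta_base                   # subject x channel

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of rejected channels."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: np.nonzero(below)[0].max() + 1]] = True
    return reject

# One-sample test of delta-beta against zero per channel; two-sided
# p-values from a normal approximation to keep the sketch dependency-free.
t = delta_beta.mean(0) / (delta_beta.std(0, ddof=1) / sqrt(n_subj))
p = np.array([2 * (1 - 0.5 * (1 + erf(abs(ti) / sqrt(2)))) for ti in t])
sig = bh_fdr(p, q=0.05)
print(f"channels significant after FDR correction: {sig.sum()} / {n_chan}")
```

A group comparison as in the study would replace the one-sample test with an ANOVA or pairwise contrasts per channel; the FDR step across channels is the same.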
Affiliation(s)
- Nana Liang
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Sha Liu
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
- Shanxi Key Laboratory of Artificial Intelligence Assisted Diagnosis and Treatment for Mental Disorders, First Hospital of Shanxi Medical University, Taiyuan, China
| | - Xinrong Li
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
- Shanxi Key Laboratory of Artificial Intelligence Assisted Diagnosis and Treatment for Mental Disorders, First Hospital of Shanxi Medical University, Taiyuan, China
| | - Dan Wen
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Qiqi Li
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Yujie Tong
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Yong Xu
- Department of Psychiatry, First Hospital/First Clinical Medical College of Shanxi Medical University, Taiyuan, China
- Shanxi Key Laboratory of Artificial Intelligence Assisted Diagnosis and Treatment for Mental Disorders, First Hospital of Shanxi Medical University, Taiyuan, China
- Department of Mental Health, Shanxi Medical University, Taiyuan, China
- *Correspondence: Yong Xu
| |
Collapse
|
13
|
Jaroszynski C, Job A, Jedynak M, David O, Delon-Martin C. Tinnitus Perception in Light of a Parietal Operculo-Insular Involvement: A Review. Brain Sci 2022; 12:334. [PMID: 35326290 PMCID: PMC8946618 DOI: 10.3390/brainsci12030334] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 02/21/2022] [Accepted: 02/24/2022] [Indexed: 12/07/2022] Open
Abstract
In the tinnitus literature, researchers have increasingly advocated a clearer distinction between tinnitus perception and tinnitus-related distress. In non-bothersome tinnitus, the perception itself can be investigated more specifically: this has provided a body of evidence, based on resting-state and activation fMRI protocols, highlighting the involvement of regions outside the conventional auditory areas, such as the right parietal operculum. Here, we review the available investigations of the human parietal operculo-insular subregions at the microscopic, mesoscopic, and macroscopic scales, arguing in favor of auditory-somatosensory cross-talk. Both the previous literature and new results on functional connectivity derived from cortico-cortical evoked potentials show that these subregions present a dense tissue of interconnections and strong connectivity with auditory and somatosensory areas in the healthy brain. Disrupted integration between these modalities may thus result in erroneous perceptions, such as tinnitus. More precisely, we highlight the role of a subregion of the right parietal operculum, known as OP3 according to the Jülich atlas, in the integration of auditory and somatosensory representations of the orofacial muscles in the healthy population. We further discuss how dysfunction of these muscles could induce hyperactivity in OP3. Evidence that direct electrical stimulation of this area elicits auditory hallucinations further suggests its involvement in tinnitus perception. Finally, a small number of neuroimaging studies of therapeutic interventions for tinnitus provide additional evidence of right parietal operculum involvement.
Collapse
Affiliation(s)
- Chloé Jaroszynski
- University Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France; (C.J.); (M.J.); (O.D.)
| | - Agnès Job
- Institut de Recherche Biomédicale des Armées, IRBA, 91220 Brétigny-sur-Orge, France;
| | - Maciej Jedynak
- University Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France; (C.J.); (M.J.); (O.D.)
- Aix Marseille University, Inserm, INS, Inst Neurosci Syst, 13005 Marseille, France
| | - Olivier David
- University Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France; (C.J.); (M.J.); (O.D.)
- Aix Marseille University, Inserm, INS, Inst Neurosci Syst, 13005 Marseille, France
| | - Chantal Delon-Martin
- University Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France; (C.J.); (M.J.); (O.D.)
| |
Collapse
|
14
|
Resting state network connectivity is attenuated by fMRI acoustic noise. Neuroimage 2021; 247:118791. [PMID: 34920084 DOI: 10.1016/j.neuroimage.2021.118791] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Revised: 10/21/2021] [Accepted: 12/07/2021] [Indexed: 12/11/2022] Open
Abstract
INTRODUCTION During the past decades there has been increasing interest in tracking brain network fluctuations in health and disease by means of resting-state functional magnetic resonance imaging (rs-fMRI). Rs-fMRI, however, does not provide an ideal environmental setting, as participants are continuously exposed to noise generated by the MRI coils during acquisition of echo-planar imaging (EPI). We investigated the effect of EPI noise on resting-state activity and connectivity using magnetoencephalography (MEG), by reproducing the acoustic characteristics of the rs-fMRI environment during the recordings. Compared with fMRI, MEG has little sensitivity to brain activity generated in deep brain structures, but it has the advantage of capturing both the dynamics of cortical magnetic oscillations with high temporal resolution and the slow magnetic fluctuations that are highly correlated with the BOLD signal. METHODS Thirty healthy subjects were enrolled in a counterbalanced design with three conditions: (a) silent resting state (Silence), (b) resting state upon EPI noise (fMRI), and (c) resting state upon white noise (White). White noise was employed to test the specificity of the fMRI noise effect. The amplitude envelope correlation (AEC) in the alpha band was used to measure the connectivity of seven resting-state networks (RSNs) of interest (default mode, dorsal attention, language, left and right auditory, and left and right sensory-motor). Vigilance dynamics were estimated from power spectral activity. RESULTS Both fMRI and White acoustic noise consistently reduced the connectivity of cortical networks. The effects were widespread, but noise and network specificities were also present: for fMRI noise, decreased connectivity was found in the right auditory and sensory-motor networks. A progressive increase of slow theta-delta activity related to drowsiness was found in all conditions, but it was significantly higher for fMRI noise.
Theta-delta activity correlated significantly and positively with variations in cortical connectivity. DISCUSSION rs-fMRI connectivity is biased by unavoidable environmental factors during scanning, which warrant more careful control and improved experimental designs. MEG is free from acoustic noise and allows a sensitive estimation of resting-state connectivity in cortical areas. Although underutilized, MEG could overcome issues related to noise during fMRI, in particular when investigation of motor and auditory networks is needed.
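The connectivity metric used in this study, amplitude envelope correlation, can be illustrated with a minimal numpy-only sketch: band-limited signals are converted to analytic signals via the FFT, and the Pearson correlation of their amplitude envelopes is taken. Real MEG pipelines add source reconstruction and leakage (orthogonalization) correction, which are omitted here, and all signal parameters below are invented:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (a numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def amplitude_envelope_correlation(x, y):
    """AEC: Pearson correlation between the amplitude envelopes of two
    band-limited signals (leakage correction omitted for brevity)."""
    env_x = np.abs(analytic_signal(x))
    env_y = np.abs(analytic_signal(y))
    return np.corrcoef(env_x, env_y)[0, 1]

# Two synthetic 10 Hz "alpha" oscillations sharing one slow amplitude envelope.
fs = 250
t = np.arange(0, 4, 1 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t)  # shared slow modulation
x = envelope * np.sin(2 * np.pi * 10 * t)
y = envelope * np.sin(2 * np.pi * 10 * t + 1.0)      # different carrier phase
aec = amplitude_envelope_correlation(x, y)           # close to 1
```

Because AEC compares envelopes rather than instantaneous phase, the two signals here are strongly "connected" even though their carriers are out of phase.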
Collapse
|
15
|
Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. PSYCHOLOGY & NEUROSCIENCE 2021; 14:298-334. [PMID: 36937077 PMCID: PMC10019101 DOI: 10.1037/pne0000258] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Objective Neuroplasticity enables the brain to establish new crossmodal connections or reorganize old connections which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize the current developments in neuroplasticity and crossmodal connectivity, and deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity. Methods To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology and other related fields available in a wide range of prominent electronic databases are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity. Results Three major themes are identified. First, it appears that neuroplasticity operates by following eight fundamental principles and crossmodal integration operates by following three principles. Second, two different forms of crossmodal connectivity, namely direct crossmodal connectivity and indirect crossmodal connectivity, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood. These are labeled as the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity. Conclusions The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
Collapse
|
16
|
Sakai H, Ueda S, Ueno K, Kumada T. Neuroplastic Reorganization Induced by Sensory Augmentation for Self-Localization During Locomotion. FRONTIERS IN NEUROERGONOMICS 2021; 2:691993. [PMID: 38235242 PMCID: PMC10790880 DOI: 10.3389/fnrgo.2021.691993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 07/21/2021] [Indexed: 01/19/2024]
Abstract
Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.
Collapse
Affiliation(s)
- Hiroyuki Sakai
- Human Science Laboratory, Toyota Central R&D Laboratories, Inc., Tokyo, Japan
| | - Sayako Ueda
- TOYOTA Collaboration Center, RIKEN Center for Brain Science, Wako, Japan
| | - Kenichi Ueno
- Support Unit for Functional Magnetic Resonance Imaging, RIKEN Center for Brain Science, Wako, Japan
| | | |
Collapse
|
17
|
Abstract
Frisson is characterised by tingling and tickling sensations accompanied by positive or negative feelings. However, it is still unknown what factors affect the intensity of frisson. We conducted experiments examining stimulus characteristics as well as individuals' mood states and personality traits. Participants completed self-reported questionnaires, including the Profile of Mood States, Beck Depression Inventory, and Big Five Inventory. They continuously indicated the subjective intensity of frisson throughout a 17-min experiment while listening to binaural brushing and tapping sounds through headphones. In interviews after the experiments, participants reported that tingling and tickling sensations originated mainly at their ears, neck, shoulders, and back. Cross-correlation results showed that the intensity of frisson was closely linked to the acoustic features of the auditory stimuli, including their amplitude, spectral centroid, and spectral bandwidth. This suggests that proximal sounds with a dark and compact timbre trigger frisson. The peak correlation between frisson and each acoustic feature was observed 2 s after the acoustic feature changed, suggesting that bottom-up auditory inputs modulate skin-related modalities. We also found that participants with anxiety were sensitive to frisson. Our results provide important clues to understanding the mechanisms of auditory-somatosensory interactions.
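The lag analysis described above (frisson ratings trailing acoustic-feature changes by about 2 s) amounts to locating the peak of a normalized cross-correlation between the two time series. A minimal sketch, with an invented sampling rate and synthetic signals standing in for the real ratings and acoustic features:

```python
import numpy as np

def peak_lag_seconds(feature, rating, fs):
    """Lag (in seconds) at which the continuous rating best correlates with
    an acoustic feature; positive means the rating trails the feature."""
    f = (feature - feature.mean()) / feature.std()
    r = (rating - rating.mean()) / rating.std()
    xcorr = np.correlate(r, f, mode="full") / len(f)
    lags = np.arange(-len(f) + 1, len(f))
    return lags[np.argmax(xcorr)] / fs

# Synthetic stand-ins: the "rating" is the "feature" delayed by 2 s.
fs = 10                                   # hypothetical 10 Hz sampling
x = np.random.default_rng(1).normal(size=520)
feature, rating = x[20:], x[:-20]         # rating trails feature by 20 samples
print(peak_lag_seconds(feature, rating, fs))  # → 2.0
```

A positive recovered lag of this kind is what supports the bottom-up interpretation: the acoustic change precedes the bodily response.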
Collapse
Affiliation(s)
- Takuya Koumura
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
| | - Masashi Nakatani
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan.,Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
| | - Hsin-I Liao
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
| | - Hirohito M Kondo
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan.,School of Psychology, Chukyo University, Nagoya, Japan
| |
Collapse
|
18
|
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782 DOI: 10.1016/j.pneurobio.2020.101948] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 10/12/2020] [Accepted: 11/04/2020] [Indexed: 12/24/2022]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
Collapse
|
19
|
Wang L, Li C, Chen D, Lv X, Go R, Wu J, Yan T. Hemodynamic response varies across tactile stimuli with different temporal structures. Hum Brain Mapp 2020; 42:587-597. [PMID: 33169898 PMCID: PMC7814760 DOI: 10.1002/hbm.25243] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 10/01/2020] [Accepted: 10/03/2020] [Indexed: 11/23/2022] Open
Abstract
Tactile stimuli can be distinguished based on their temporal features (e.g., duration, local frequency, and number of pulses), which are fundamental for vibrotactile frequency perception. Characterizing how the hemodynamic response changes in shape across experimental conditions is important for designing and interpreting fMRI studies of tactile information processing. In this study, we focused on periodic tactile stimuli with different temporal structures and explored the hemodynamic response function (HRF) induced by these stimuli. We found that HRFs were stimulus‐dependent in tactile‐related brain areas. Continuous stimuli induced a greater area of activation and a stronger, narrower hemodynamic response than intermittent stimuli of the same duration. The magnitude of the HRF increased with increasing stimulus duration. When the response characteristics were normalized into a topographic matrix, the nonlinearity became obvious. These results suggest that the stimulation pattern and duration within a cycle may be key characteristics for distinguishing different stimuli. We conclude that tactile stimuli with different temporal structures induce different HRFs, which are essential for vibrotactile perception and should be considered in fMRI experimental designs and analyses.
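As background for the HRF-shape comparisons above, the conventional baseline is the canonical "double-gamma" HRF (a standard SPM-style model, not the stimulus-dependent HRFs estimated in this study): a positive gamma function peaking a few seconds after stimulus onset minus a scaled gamma modeling the late undershoot. A numpy sketch with standard default parameters:

```python
import math
import numpy as np

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma HRF: gamma(shape=peak) minus a scaled
    gamma(shape=undershoot), both with unit time scale."""
    t = np.asarray(t, dtype=float)
    pos = t ** (peak - 1) * np.exp(-t) / math.gamma(peak)
    neg = t ** (undershoot - 1) * np.exp(-t) / math.gamma(undershoot)
    return pos - ratio * neg

t = np.arange(0, 32, 0.1)          # seconds after stimulus onset
h = double_gamma_hrf(t)
print(round(float(t[np.argmax(h)]), 1))  # → 5.0 (peak latency in seconds)
```

Stimulus-dependent departures from this fixed shape, as reported here, are exactly what a canonical-HRF analysis would miss.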
Collapse
Affiliation(s)
- Luyao Wang
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China.,Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
| | - Chunlin Li
- School of Biomedical Engineering, Capital Medical University, Beijing, China
| | - Duanduan Chen
- School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Xiaoyu Lv
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China.,Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
| | - Ritsu Go
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China.,Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
| | - Jinglong Wu
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China.,Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China.,Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
| | - Tianyi Yan
- School of Life Science, Beijing Institute of Technology, Beijing, China.,Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
20
|
Scurry AN, Huber E, Matera C, Jiang F. Increased Right Posterior STS Recruitment Without Enhanced Directional-Tuning During Tactile Motion Processing in Early Deaf Individuals. Front Neurosci 2020; 14:864. [PMID: 32982667 PMCID: PMC7477335 DOI: 10.3389/fnins.2020.00864] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/24/2020] [Indexed: 01/19/2023] Open
Abstract
Upon early sensory deprivation, the remaining modalities often exhibit cross-modal reorganization, such as primary auditory cortex (PAC) recruitment for visual motion processing in early deafness (ED). Previous studies of compensatory plasticity in ED individuals have given less attention to tactile motion processing. In the current study, we aimed to examine the effects of early auditory deprivation on tactile motion processing. We simulated four directions of tactile motion on each participant's right index finger and characterized their tactile motion responses and directional-tuning profiles using population receptive field analysis. Similar tactile motion responses were found within primary (SI) and secondary (SII) somatosensory cortices between ED and hearing control groups, whereas ED individuals showed a reduced proportion of voxels with directionally tuned responses in SI contralateral to stimulation. There were also significant but minimal responses to tactile motion within PAC for both groups. While early deaf individuals show significantly larger recruitment of right posterior superior temporal sulcus (pSTS) region upon tactile motion stimulation, there was no evidence of enhanced directional tuning. Greater recruitment of right pSTS region is consistent with prior studies reporting reorganization of multimodal areas due to sensory deprivation. The absence of increased directional tuning within the right pSTS region may suggest a more distributed population of neurons dedicated to processing tactile spatial information as a consequence of early auditory deprivation.
Collapse
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Elizabeth Huber
- Department of Speech and Hearing Sciences, Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
| | - Courtney Matera
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| |
Collapse
|
21
|
Spence C. Shitsukan - the Multisensory Perception of Quality. Multisens Res 2020; 33:737-775. [PMID: 32143187 DOI: 10.1163/22134808-bja10003] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Accepted: 01/29/2020] [Indexed: 11/19/2022]
Abstract
We often estimate, or perceive, the quality of materials, surfaces, and objects, what the Japanese refer to as 'shitsukan', by means of several of our senses. The majority of the literature on shitsukan perception has, though, tended to focus on the unimodal visual evaluation of stimulus properties. In part, this presumably reflects the widespread hegemony of the visual in the modern era and, in part, is a result of the growing interest, not to mention the impressive advances, in digital rendering amongst the computer graphics community. Nevertheless, regardless of such an oculocentric bias in so much of the empirical literature, it is important to note that several other senses often do contribute to the impression of the material quality of surfaces, materials, and objects as experienced in the real world, rather than just in virtual reality. Understanding the multisensory contributions to the perception of material quality, especially when combined with computational and neural data, is likely to have implications for a number of fields of basic research as well as being applicable to emerging domains such as, for example, multisensory augmented retail, not to mention multisensory packaging design.
Collapse
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Anna Watts Building, University of Oxford, Oxford, OX2 6GG, UK
| |
Collapse
|
22
|
Rahman MS, Barnes KA, Crommett LE, Tommerdahl M, Yau JM. Auditory and tactile frequency representations are co-embedded in modality-defined cortical sensory systems. Neuroimage 2020; 215:116837. [PMID: 32289461 PMCID: PMC7292761 DOI: 10.1016/j.neuroimage.2020.116837] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2019] [Revised: 03/17/2020] [Accepted: 04/06/2020] [Indexed: 11/18/2022] Open
Abstract
Sensory information is represented and elaborated in hierarchical cortical systems that are thought to be dedicated to individual sensory modalities. This traditional view of sensory cortex organization has been challenged by recent evidence of multimodal responses in primary and association sensory areas. Although it is indisputable that sensory areas respond to multiple modalities, it remains unclear whether these multimodal responses reflect selective information processing for particular stimulus features. Here, we used fMRI adaptation to identify brain regions that are sensitive to the temporal frequency information contained in auditory, tactile, and audiotactile stimulus sequences. A number of brain regions distributed over the parietal and temporal lobes exhibited frequency-selective temporal response modulation for both auditory and tactile stimulus events, as indexed by repetition suppression effects. A smaller set of regions responded to crossmodal adaptation sequences in a frequency-dependent manner. Despite an extensive overlap of multimodal frequency-selective responses across the parietal and temporal lobes, representational similarity analysis revealed a cortical "regional landscape" that clearly reflected distinct somatosensory and auditory processing systems that converged on modality-invariant areas. These structured relationships between brain regions were also evident in spontaneous signal fluctuation patterns measured at rest. Our results reveal that multimodal processing in human cortex can be feature-specific and that multimodal frequency representations are embedded in the intrinsically hierarchical organization of cortical sensory systems.
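The representational similarity analysis step mentioned above can be illustrated schematically: build a representational dissimilarity matrix (RDM) per region from condition-wise response patterns, then compare RDMs across regions to map the "regional landscape". The study's actual voxel data and distance choices are not reproduced; everything below is synthetic:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation between
    every pair of condition response patterns (rows = conditions, cols = voxels)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(a, b):
    """Second-order similarity: correlate the RDMs' upper triangles
    (a rank correlation is common in practice; plain Pearson shown)."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Synthetic data: two "regions" responding to 4 conditions (2 per category).
rng = np.random.default_rng(3)
base = rng.normal(size=(2, 200))   # two underlying category templates

def noisy(v):
    """Add measurement noise to a template pattern."""
    return v + 0.3 * rng.normal(size=v.shape)

region_a = np.vstack([noisy(base[0]), noisy(base[0]), noisy(base[1]), noisy(base[1])])
region_b = np.vstack([noisy(base[0]), noisy(base[0]), noisy(base[1]), noisy(base[1])])
similarity = compare_rdms(rdm(region_a), rdm(region_b))  # high: shared geometry
```

Regions whose RDMs correlate strongly in this second-order sense share representational geometry even if their raw voxel responses differ, which is how separate somatosensory and auditory systems converging on modality-invariant areas can be detected.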
Collapse
Affiliation(s)
- Md Shoaibur Rahman
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA
| | - Kelly Anne Barnes
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA; Department of Behavioral and Social Sciences, San Jacinto College - South, Houston, 13735 Beamer Rd, S13.269, Houston, TX, 77089, USA
| | - Lexi E Crommett
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA
| | - Mark Tommerdahl
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, CB No. 7575, Chapel Hill, NC, 27599, USA
| | - Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA.
| |
Collapse
|
23
|
Villalonga MB, Sussman RF, Sekuler R. Feeling the Beat (and Seeing It, Too): Vibrotactile, Visual, and Bimodal Rate Discrimination. Multisens Res 2020; 33:31-59. [PMID: 31648198 DOI: 10.1163/22134808-20191413] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 07/09/2019] [Indexed: 11/19/2022]
Abstract
Beats are among the basic units of perceptual experience. Produced by regular, intermittent stimulation, beats are most commonly associated with audition, but the experience of a beat can result from stimulation in other modalities as well. We studied the robustness of visual, vibrotactile, and bimodal signals as sources of beat perception. Subjects attempted to discriminate between pulse trains delivered at 3 Hz or at 6 Hz. To investigate signal robustness, we intentionally degraded signals on two-thirds of the trials using temporal-domain noise. On these trials, inter-pulse intervals (IPIs) were stochastic, perturbed independently from the nominal IPI by random samples from zero-mean Gaussian distributions with different variances. These perturbations produced directional changes in the IPIs, which either increased or decreased the likelihood of confusing the two pulse rates. In addition to affording an assay of signal robustness, this paradigm made it possible to gauge how subjects' judgments were influenced by successive IPIs. Logistic regression revealed a strong primacy effect: subjects' decisions were disproportionately influenced by a trial's initial IPIs. Response times and parameter estimates from drift-diffusion modeling showed that information accumulates more rapidly with bimodal stimulation than with either unimodal stimulus alone. Analysis of error rates within each condition suggested consistently optimal decision making, even with increased IPI variability. Finally, beat information delivered by vibrotactile signals proved just as robust as information conveyed by visual signals, confirming vibrotactile stimulation's potential as a communication channel.
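The degraded pulse trains described above follow a simple recipe: each inter-pulse interval is the nominal IPI plus an independent zero-mean Gaussian perturbation. A sketch with invented parameter values (the study's actual variances are not reproduced):

```python
import numpy as np

def jittered_ipis(rate_hz, n_pulses, sigma_ms, rng):
    """Inter-pulse intervals (ms) for a nominal pulse rate, each perturbed by
    an independent zero-mean Gaussian sample (sigma_ms = 0 gives a clean train)."""
    nominal = 1000.0 / rate_hz
    jitter = rng.normal(0.0, sigma_ms, size=n_pulses - 1)
    return np.clip(nominal + jitter, 1.0, None)  # keep every interval positive

rng = np.random.default_rng(7)
clean = jittered_ipis(3, 10, 0.0, rng)    # 3 Hz train: every IPI = 333.3 ms
noisy = jittered_ipis(6, 10, 30.0, rng)   # 6 Hz train with sigma = 30 ms jitter
```

With nominal IPIs of 333 ms (3 Hz) and 167 ms (6 Hz), large perturbations can push individual intervals of one train toward the other's nominal rate, which is exactly what makes the two rates confusable.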
Collapse
Affiliation(s)
| | - Rachel F Sussman
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| | - Robert Sekuler
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| |
Collapse
|
24
|
Mantel T, Dresel C, Welte M, Meindl T, Jochim A, Zimmer C, Haslinger B. Altered sensory system activity and connectivity patterns in adductor spasmodic dysphonia. Sci Rep 2020; 10:10179. [PMID: 32576918 PMCID: PMC7311401 DOI: 10.1038/s41598-020-67295-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Accepted: 05/26/2020] [Indexed: 12/19/2022] Open
Abstract
Adductor-type spasmodic dysphonia (ADSD) manifests in effortful speech that is temporarily relievable by botulinum neurotoxin type A (BoNT-A). Previous work has described abnormal structure, phonation-related and resting-state sensorimotor abnormalities, and altered peripheral tactile thresholds in ADSD. This study aimed to assess abnormal central tactile processing patterns, their spatial relation to dysfunctional resting-state connectivity, and their responsiveness to BoNT-A. Functional MRI was performed in 14 ADSD patients before and 12 under the BoNT-A effect, and in 15 controls, (i) during automatized tactile stimulation of the face/hand, and (ii) at rest. Between-group differences in stimulation-induced activation and in resting-state connectivity (regional homogeneity, connectivity strength within selected sensory(motor) networks), as well as within-patient BoNT-A effects on these differences, were investigated. Contralateral-to-stimulation overactivity in ADSD before BoNT-A involved primary and secondary somatosensory representations, along with abnormalities in higher-order parietal, insular, temporal and premotor cortices. Dysphonic impairment in ADSD was positively associated with left-hemispheric temporal activity. Connectivity was increased within the right premotor (sensorimotor network) and left primary auditory cortex (auditory network), and regionally reduced at the temporoparietal junction. Within patients, activation and connectivity before versus under BoNT-A did not differ significantly. Abnormal central somatosensory processing in ADSD supports its significance as a common pathophysiological trait of focal dystonia. Abnormal temporal-cortex tactile processing and resting-state connectivity might hint at abnormal cross-modal sensory interactions.
Affiliation(s)
- Tobias Mantel
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
- Christian Dresel
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany; Department of Neurology, Johannes Gutenberg University, Langenbeckstrasse 1, Mainz, Germany
- Michael Welte
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
- Tobias Meindl
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
- Angela Jochim
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
- Claus Zimmer
- Department of Neuroradiology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
- Bernhard Haslinger
- Department of Neurology, Klinikum rechts der Isar, Technische Universität München, Ismaningerstrasse 22, Munich, Germany
25
Joo SW, Yoon W, Jo YT, Kim H, Kim Y, Lee J. Aberrant Executive Control and Auditory Networks in Recent-Onset Schizophrenia. Neuropsychiatr Dis Treat 2020; 16:1561-1570. [PMID: 32606708 PMCID: PMC7319504 DOI: 10.2147/ndt.s254208] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 05/27/2020] [Indexed: 12/21/2022] Open
Abstract
PURPOSE Despite a large number of resting-state functional MRI (rsfMRI) studies in schizophrenia, current evidence on abnormalities of the functional connectivity (FC) of resting-state networks is highly variable, and findings on recent-onset schizophrenia remain insufficient compared with those on chronic schizophrenia. PATIENTS AND METHODS We performed rsfMRI in 46 patients with recent-onset schizophrenia and 22 healthy controls. Group independent component analysis and dual regression were performed for voxel-wise comparisons between the groups. Correlations of symptom severity, cognitive function, duration of illness, and total antipsychotic dose with FC were evaluated with Spearman's rho. RESULTS Compared with the control group, the patient group showed significantly decreased FC in the left supplementary motor cortex and supramarginal gyrus (executive control network) and in the right postcentral gyrus (auditory network). In the patient group, the total antipsychotic dose correlated significantly with the FC of the cluster in the left supplementary motor cortex within the executive control network. CONCLUSION Patients with recent-onset schizophrenia have decreased FC of the executive control and auditory networks compared with healthy controls.
Affiliation(s)
- Sung Woo Joo
- Medical Corps, Republic of Korea Navy 1st Fleet, Donghae, Republic of Korea
- Woon Yoon
- Department of Psychiatry, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Young Tak Jo
- Department of Psychiatry, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Harin Kim
- Department of Psychiatry, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yangsik Kim
- Department of Psychiatry, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jungsun Lee
- Department of Psychiatry, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
26
Ohashi H, Ito T. Recalibration of auditory perception of speech due to orofacial somatosensory inputs during speech motor adaptation. J Neurophysiol 2019; 122:2076-2084. [PMID: 31509469 DOI: 10.1152/jn.00028.2019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Speech motor control and learning rely on both somatosensory and auditory inputs. Somatosensory inputs associated with speech production can also affect auditory perception of speech, and this somatosensory-auditory interaction may play a fundamental role in speech perception. In this report, we show that the somatosensory system contributes to perceptual recalibration separately from its role in motor function. Subjects underwent speech motor adaptation to altered auditory feedback. Auditory perception of speech was assessed in phonemic identification tests before and after adaptation. To investigate the role of the somatosensory system in motor adaptation and the subsequent perceptual change, we applied orofacial skin stretch in either a backward or forward direction during the auditory feedback alteration as a somatosensory modulation. The somatosensory modulation did not affect the amount of adaptation at the end of training, although it changed the rate of adaptation. However, perception following speech adaptation was altered depending on the direction of the somatosensory modulation. Somatosensory inflow rather than motor outflow thus drives changes in auditory perception of speech following speech adaptation, suggesting that somatosensory inputs play an important role in the tuning of the perceptual system. NEW & NOTEWORTHY This article reports that the somatosensory system acts not merely in concert with the motor system but predominantly in the calibration of auditory perception of speech by speech production.
Affiliation(s)
- Hiroki Ohashi
- Department of Psychology, McGill University, Montreal, Quebec, Canada; Haskins Laboratories, New Haven, Connecticut
- Takayuki Ito
- Haskins Laboratories, New Haven, Connecticut; Centre National de la Recherche Scientifique, GIPSA-Lab, Grenoble Institute of Technology, University of Grenoble-Alpes, Saint Martin d'Heres, France
27
EPI distortion correction for concurrent human brain stimulation and imaging at 3T. J Neurosci Methods 2019; 327:108400. [PMID: 31434000 DOI: 10.1016/j.jneumeth.2019.108400] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Revised: 08/15/2019] [Accepted: 08/17/2019] [Indexed: 01/21/2023]
Abstract
BACKGROUND Transcranial magnetic stimulation (TMS) can be paired with functional magnetic resonance imaging (fMRI) in concurrent TMS-fMRI experiments. These multimodal experiments enable causal probing of network architecture in the human brain, which can complement alternative network-mapping approaches. Critically, merely introducing the TMS coil into the scanner environment can sometimes produce substantial magnetic field inhomogeneities and spatial distortions that limit the utility of concurrent TMS-fMRI. METHOD AND RESULTS We assessed the efficacy of point spread function corrected echo planar imaging (PSF-EPI) in correcting for the field inhomogeneities associated with a TMS coil at 3 T. In phantom and brain scans, we quantitatively compared the coil-induced distortion artifacts measured in EPI scans with and without PSF correction. Applying PSF corrections to the EPI data significantly improved the signal-to-noise ratio and reduced distortions. In phantom scans with the PSF-EPI sequence, we also characterized the temporal profile of dynamic artifacts associated with TMS delivery and found that image quality remained high as long as the TMS pulse preceded the RF excitation pulses by at least 50 ms. Lastly, we validated the PSF-EPI sequence in human brain scans involving TMS and motor behavior, as well as in resting-state fMRI scans. CONCLUSIONS Our collective results demonstrate the potential benefits of PSF-EPI for concurrent TMS-fMRI when coil-related artifacts are a concern. The ability to collect high-quality resting-state fMRI data in the same session as the concurrent TMS-fMRI experiment offers a unique opportunity to interrogate network architecture in the human brain.
28
Crommett LE, Madala D, Yau JM. Multisensory perceptual interactions between higher-order temporal frequency signals. J Exp Psychol Gen 2019; 148:1124-1137. [PMID: 30335446 PMCID: PMC6472995 DOI: 10.1037/xge0000513] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Naturally occurring signals in audition and touch can be complex and marked by temporal variations in frequency and amplitude. Auditory frequency sweep processing has been studied extensively; however, much less is known about sweep processing in touch because studies have primarily focused on the perception of simple sinusoidal vibrations. Given the extensive interactions between audition and touch in the frequency processing of pure tone signals, we reasoned that these senses might also interact in the processing of higher-order frequency representations like sweeps. In a series of psychophysical experiments, we characterized the influence of auditory distractors on the ability of participants to discriminate tactile frequency sweeps. Auditory frequency sweeps systematically biased the tactile perception of sweep direction. Importantly, auditory cues exerted little influence on tactile sweep direction perception when the sounds and vibrations occupied different absolute frequency ranges or when the sounds consisted of intensity sweeps. Thus, audition and touch interact in frequency sweep perception in a frequency- and feature-specific manner. Our results demonstrate that audio-tactile interactions are not constrained to the processing of simple sinusoids. Because higher-order frequency representations may be synthesized from simpler representations, our findings imply that multisensory interactions in the temporal frequency domain span multiple hierarchical levels in sensory processing. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Affiliation(s)
- Lexi E. Crommett
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
- Jeffrey M. Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
29
Convento S, Wegner-Clemens KA, Yau JM. Reciprocal Interactions Between Audition and Touch in Flutter Frequency Perception. Multisens Res 2019; 32:67-85. [PMID: 31059492 DOI: 10.1163/22134808-20181334] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2018] [Accepted: 11/09/2018] [Indexed: 11/19/2022]
Abstract
In both audition and touch, sensory cues comprising repeating events are perceived either as a continuous signal or as a stream of temporally discrete events (flutter), depending on the events' repetition rate. At high repetition rates (>100 Hz), auditory and tactile cues interact reciprocally in pitch processing. The frequency of a cue experienced in one modality systematically biases the perceived frequency of a cue experienced in the other modality. Here, we tested whether audition and touch also interact in the processing of low-frequency stimulation. We also tested whether multisensory interactions occurred if the stimulation in one modality comprised click trains and the stimulation in the other modality comprised amplitude-modulated signals. We found that auditory cues bias touch and tactile cues bias audition on a flutter discrimination task. Even though participants were instructed to attend to a single sensory modality and ignore the other cue, the flutter rate in the attended modality is perceived to be similar to that of the distractor modality. Moreover, we observed similar interaction patterns regardless of stimulus type and whether the same stimulus types were experienced by both senses. Combined with earlier studies, our results suggest that the nervous system extracts and combines temporal rate information from multisensory environmental signals, regardless of stimulus type, in both the low- and high temporal frequency domains. This function likely reflects the importance of temporal frequency as a fundamental feature of our multisensory experience.
Affiliation(s)
- Silvia Convento
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Kira A Wegner-Clemens
- Department of Neurosurgery, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
30
Cieśla K, Wolak T, Lorens A, Heimler B, Skarżyński H, Amedi A. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci 2019; 37:155-166. [PMID: 31006700 PMCID: PMC6598101 DOI: 10.3233/rnn-190898] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
BACKGROUND Hearing loss is becoming a serious social and health problem. Its prevalence among the elderly has reached epidemic proportions, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can promote the development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle to understand speech in challenging acoustic environments, especially in the presence of a competing speaker. OBJECTIVES In the current proof-of-concept study we tested whether multisensory stimulation pairing audition with a minimal-size touch device would improve the intelligibility of speech in noise. METHODS To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered to two fingertips. Based on the inverse effectiveness law, i.e., that multisensory enhancement is strongest when the signal-to-noise ratio between senses is lowest, we embedded non-native-language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch. RESULTS We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio) in the multisensory condition without any training, at the group level as well as in every participant. The reported group-level improvement of 6 dB is substantial, considering that an increase of 10 dB represents a doubling of perceived loudness. CONCLUSIONS These results are especially relevant when compared with previous SSD studies, which showed behavioral effects only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, whether or not they use HAs or CIs. We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language.
Affiliation(s)
- Katarzyna Cieśla
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Artur Lorens
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Benedetta Heimler
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Henryk Skarżyński
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
31
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on the speed of neuronal activity remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. Electrical pulses were applied to the left or right median nerve at 20 Hz for 1500 ms (a pulse train was used because no cross-modal effect was elicited by a single pulse), and a 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms. The latency of the N20m, originating from Brodmann's area 3b, was measured for each pulse. The auditory stimulation significantly shortened the N20m latency at 1050 and 1100 ms. This reduction in N20m latency was identical for ipsilateral and contralateral sounds at both latency points. Therefore, somatosensory-auditory interaction, such as input to area 3b from the thalamus, occurred during the early stages of synaptic transmission. Auditory information converging on the somatosensory system was considered to arise from the early stages of the feedforward pathway. Acceleration of information processing through cross-modal interaction seemed to be partly due to faster processing in the sensory cortex.
32
Kato M, Yokoyama C, Kawasaki A, Takeda C, Koike T, Onoe H, Iriki A. Individual identity and affective valence in marmoset calls: in vivo brain imaging with vocal sound playback. Anim Cogn 2018; 21:331-343. [PMID: 29488110 PMCID: PMC5908821 DOI: 10.1007/s10071-018-1169-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Revised: 02/12/2018] [Accepted: 02/15/2018] [Indexed: 12/29/2022]
Abstract
As in humans, vocal communication is an important social tool for nonhuman primates. Common marmosets (Callithrix jacchus) often produce whistle-like 'phee' calls when they are visually separated from conspecifics. The neural processes specific to phee call perception, however, are largely unknown, despite the possibility that these processes involve social information. Here, we examined behavioral and whole-brain mapping evidence regarding the detection of individual conspecific phee calls using an audio playback procedure. Phee calls evoked sound-exploratory responses when the caller changed, indicating that marmosets can discriminate between caller identities. Positron emission tomography with [18F]fluorodeoxyglucose revealed that perception of phee calls from a single subject was associated with activity in the dorsolateral prefrontal, medial prefrontal, and orbitofrontal cortices, and in the amygdala. These findings suggest that these regions are implicated in the cognitive and affective processing of salient social information. However, phee calls from multiple subjects induced activation in only some of these regions, such as the dorsolateral prefrontal cortex. We also found distinctive brain deactivation and functional connectivity associated with phee call perception depending on the caller change. Judging by changes in pupil size, phee calls from a single subject induced a higher arousal level than those from multiple subjects. These results suggest that marmoset phee calls convey information about individual identity and affective valence depending on the consistency or variability of the caller. Given this flexible, identity-based perception of the calls, humans and marmosets may share some neural mechanisms underlying conspecific vocal perception.
Affiliation(s)
- Masaki Kato
- Laboratory for Symbolic Cognitive Development, RIKEN Brain Science Institute, Wako, Saitama, Japan
- Research Development Section, Research Promotion Hub, Office for Enhancing Institutional Capacity, Hokkaido University, Sapporo, Hokkaido, Japan
- Chihiro Yokoyama
- Division of Bio-Function Dynamics Imaging, RIKEN Center for Life Science Technologies, Kobe, Hyogo, Japan
- Akihiro Kawasaki
- Division of Bio-Function Dynamics Imaging, RIKEN Center for Life Science Technologies, Kobe, Hyogo, Japan
- Chiho Takeda
- Division of Bio-Function Dynamics Imaging, RIKEN Center for Life Science Technologies, Kobe, Hyogo, Japan
- Taku Koike
- Laboratory for Symbolic Cognitive Development, RIKEN Brain Science Institute, Wako, Saitama, Japan
- Hirotaka Onoe
- Division of Bio-Function Dynamics Imaging, RIKEN Center for Life Science Technologies, Kobe, Hyogo, Japan
- Atsushi Iriki
- Laboratory for Symbolic Cognitive Development, RIKEN Brain Science Institute, Wako, Saitama, Japan
- RIKEN-NTU Research Centre for Human Biology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
33
Convento S, Rahman MS, Yau JM. Selective Attention Gates the Interactive Crossmodal Coupling between Perceptual Systems. Curr Biol 2018; 28:746-752.e5. [PMID: 29456139 DOI: 10.1016/j.cub.2018.01.021] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2017] [Revised: 12/11/2017] [Accepted: 01/09/2018] [Indexed: 10/18/2022]
Abstract
Sensory cortical systems often activate in parallel, even when stimulation is experienced through a single sensory modality [1-3]. Co-activations may reflect the interactive coupling between information-linked cortical systems or merely parallel but independent sensory processing. We report causal evidence consistent with the hypothesis that human somatosensory cortex (S1), which co-activates with auditory cortex during the processing of vibrations and textures [4-9], interactively couples to cortical systems that support auditory perception. In a series of behavioral experiments, we used transcranial magnetic stimulation (TMS) to probe interactions between the somatosensory and auditory perceptual systems as we manipulated attention state. Acute TMS over S1 impairs auditory frequency perception when subjects simultaneously attend to auditory and tactile frequency, but not when attention is directed to audition alone. Auditory frequency perception is unaffected by TMS over visual cortex, thus confirming the privileged interactions between the somatosensory and auditory systems in temporal frequency processing [10-13]. Our results provide a key demonstration that selective attention can modulate the functional properties of cortical systems thought to support specific sensory modalities. The gating of crossmodal coupling by selective attention may critically support multisensory interactions and feature-specific perception.
Affiliation(s)
- Silvia Convento
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Md Shoaibur Rahman
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA