1
Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv 2024:2024.01.20.576475. [PMID: 38293021; PMCID: PMC10827228; DOI: 10.1101/2024.01.20.576475]
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions, and one or two auditory and audiovisual speech localizer sessions, were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility both between resting-state sessions (Dice coefficient: 69-78%) and between resting-state and task sessions (Dice coefficient: 62-73%). This demonstrates that auditory areas in STC can be reliably segmented into functional subareas. The interindividual variability was significantly larger than the intraindividual variability (Dice coefficient: 57-68%, p<0.001), indicating that the parcellations also captured meaningful interindividual variability. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity in the functional parcellations was not explainable by the similarity of macroanatomical properties of the auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
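Reproducibility here is quantified with the Dice coefficient, 2|A∩B| / (|A| + |B|), computed between two labelings of the same cortical vertices. A minimal sketch of that overlap computation (toy arrays and function names for illustration, not the authors' code):

```python
import numpy as np

def dice_coefficient(labels_a, labels_b, network_id):
    """Dice overlap for one network between two parcellations of the
    same vertices: 2|A & B| / (|A| + |B|)."""
    a = labels_a == network_id
    b = labels_b == network_id
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

# Toy example: two 10-vertex parcellations with network labels {0, 1}.
rng = np.random.default_rng(0)
p1 = rng.integers(0, 2, 10)
p2 = rng.integers(0, 2, 10)
print(f"Dice for network 1: {dice_coefficient(p1, p2, 1):.2f}")
```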
Affiliation(s)
- M Hakonen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- L Dahmani: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- K Lankinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Ren: Division of Brain Sciences, Changping Laboratory, Beijing, China
- J Barbaro: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Blazejewska: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- W Cui: Division of Brain Sciences, Changping Laboratory, Beijing, China
- P Kotlarz: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- M Li: Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- T Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- I Uluç: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- D Wang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- H Liu: Division of Brain Sciences, Changping Laboratory, Beijing, China; Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
- J Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
2
Guo G, Wang N, Sun C, Geng H. Embodied Cross-Modal Interactions Based on an Altercentric Reference Frame. Brain Sci 2024; 14:314. [PMID: 38671966; PMCID: PMC11048532; DOI: 10.3390/brainsci14040314]
Abstract
Accurate comprehension of others' thoughts and intentions is crucial for smooth social interactions, and understanding their perceptual experiences serves as a fundamental basis for this high-level social cognition. However, previous research on perceptual processing from others' perspectives has focused predominantly on the visual modality, leaving the role of multisensory inputs in this process largely unexplored. By incorporating auditory stimuli into visual perspective-taking (VPT) tasks, we designed a novel experimental paradigm in which the spatial correspondence between visual and auditory stimuli was limited to the altercentric rather than the egocentric reference frame. Overall, we found that when individuals engaged in explicit or implicit VPT to process visual stimuli from an avatar's viewpoint, the concomitantly presented auditory stimuli were also processed within this avatar-centered reference frame, revealing altercentric cross-modal interactions.
Affiliation(s)
- Guanchen Guo: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Nanbo Wang: Department of Psychology, School of Health, Fujian Medical University, Fuzhou 350122, China
- Chu Sun: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Haiyan Geng: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
3
Crucianelli L, Reader AT, Ehrsson HH. Subcortical contributions to the sense of body ownership. Brain 2024; 147:390-405. [PMID: 37847057; PMCID: PMC10834261; DOI: 10.1093/brain/awad359]
Abstract
The sense of body ownership (i.e. the feeling that our body or its parts belong to us) plays a key role in bodily self-consciousness and is believed to stem from multisensory integration. Experimental paradigms such as the rubber hand illusion have been developed to allow the controlled manipulation of body ownership in laboratory settings, providing effective tools for investigating malleability in the sense of body ownership and the boundaries that distinguish self from other. Neuroimaging studies of body ownership converge on the involvement of several cortical regions, including the premotor cortex and posterior parietal cortex. However, relatively less attention has been paid to subcortical structures that may also contribute to body ownership perception, such as the cerebellum and putamen. Here, on the basis of neuroimaging and neuropsychological observations, we provide an overview of relevant subcortical regions and consider their potential role in generating and maintaining a sense of ownership over the body. We also suggest novel avenues for future research targeting the role of subcortical regions in making sense of the body as our own.
Affiliation(s)
- Laura Crucianelli: Department of Biological and Experimental Psychology, Queen Mary University of London, London E1 4DQ, UK; Department of Neuroscience, Karolinska Institutet, Stockholm 171 65, Sweden
- Arran T Reader: Department of Psychology, Faculty of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK
- H Henrik Ehrsson: Department of Neuroscience, Karolinska Institutet, Stockholm 171 65, Sweden
4
Park M, Blake R, Kim CY. Audiovisual interactions outside of visual awareness during motion adaptation. Neurosci Conscious 2024; 2024:niad027. [PMID: 38292024; PMCID: PMC10823907; DOI: 10.1093/nc/niad027]
Abstract
Motion aftereffects (MAEs), illusory motion perceived in the direction opposite to real motion viewed during prior adaptation, have been used to assess audiovisual interactions. In a previous study from our laboratory, we demonstrated that a congruent direction of auditory motion presented concurrently with visual motion during adaptation strengthened the consequent visual MAE, compared to when auditory motion was incongruent in direction. Those judgments of MAE strength, however, could have been influenced by expectations or response bias arising from mere knowledge of the state of audiovisual congruity during adaptation. To prevent such knowledge, we here employed continuous flash suppression to render visual motion perceptually invisible during adaptation, ensuring that observers were completely unaware of the visual adapting motion and aware only of the motion direction of the sound they were hearing. We found a small but statistically significant congruence effect of sound on the adaptation strength produced by invisible adaptation motion. After considering alternative explanations for this finding, we conclude that auditory motion can impact the strength of visual motion adaptation produced by translational visual motion even when that motion transpires outside of awareness.
Affiliation(s)
- Minsun Park: School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Randolph Blake: Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
- Chai-Youn Kim: School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
5
Shan L, Yuan L, Zhang B, Ma J, Xu X, Gu F, Jiang Y, Dai J. Neural Integration of Audiovisual Sensory Inputs in Macaque Amygdala and Adjacent Regions. Neurosci Bull 2023; 39:1749-1761. [PMID: 36920645; PMCID: PMC10661144; DOI: 10.1007/s12264-023-01043-8]
Abstract
Integrating multisensory inputs to generate accurate perception and guide behavior is among the most critical functions of the brain. Subcortical regions such as the amygdala are involved in sensory processing, including vision and audition, yet their roles in multisensory integration remain unclear. In this study, we systematically investigated how neurons in the amygdala and adjacent regions integrate audiovisual sensory inputs, using a semi-chronic multi-electrode array and multiple combinations of audiovisual stimuli. In a sample of 332 neurons, we characterized diverse response patterns to audiovisual stimuli and the neural signatures of bimodal over unimodal modulation, which could be classified into four types with distinct regional origins. Using hierarchical clustering, we further grouped the neurons into five clusters associated with different integrative functions and subregions. Finally, we identified regions that distinguished congruent from incongruent bimodal sensory inputs. Overall, visual processing dominates audiovisual integration in the amygdala and adjacent regions. Our findings shed new light on the neural mechanisms of multisensory integration in the primate brain.
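As a concrete illustration of the clustering step, here is a minimal sketch of grouping neurons into five clusters from their mean responses using Ward-linkage hierarchical clustering; the response matrix and feature layout are hypothetical stand-ins, not the authors' data or code:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical response matrix: 332 neurons x 6 features
# (e.g., mean firing rates to auditory, visual, and audiovisual stimuli).
rng = np.random.default_rng(1)
responses = rng.normal(size=(332, 6))

# Z-score each feature, build a Ward-linkage tree, and cut it into
# five groups, mirroring the five clusters reported in the abstract.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
tree = linkage(z, method="ward")
groups = fcluster(tree, t=5, criterion="maxclust")
print(np.bincount(groups)[1:])  # number of neurons per cluster
```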
Affiliation(s)
- Liang Shan: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China
- Liu Yuan: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Bo Zhang: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Key Laboratory of Brain Science, Zunyi Medical University, Zunyi 563000, China
- Jian Ma: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiao Xu: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fei Gu: University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Yi Jiang: University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Chinese Institute for Brain Research, Beijing 102206, China
- Ji Dai: CAS Key Laboratory of Brain Connectome and Manipulation, Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China; Shenzhen Technological Research Center for Primate Translational Medicine, Shenzhen 518055, China
6
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309; PMCID: PMC10404930; DOI: 10.1098/rstb.2022.0338]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies of sensory processing, the sensory cortices were considered to process sensory information in a modality-specific manner. The sensory cortices, however, send this information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where inputs from multiple modalities converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed, because these brain areas interact with each other via complicated circuits that can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders and may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh: Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee: Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea; Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
7
Stockert A, Schwartze M, Poeppel D, Anwander A, Kotz SA. Temporo-cerebellar connectivity underlies timing constraints in audition. eLife 2021; 10:67303. [PMID: 34542407; PMCID: PMC8480974; DOI: 10.7554/elife.67303]
Abstract
Flexible and efficient adaptation to dynamic, rapid changes in the auditory environment likely involves generating and updating internal models. Such models arguably exploit connections between the neocortex and the cerebellum, supporting proactive adaptation. Here, we tested whether temporo-cerebellar disconnection is associated with the processing of sound at short timescales. First, we identified lesion-specific deficits in the encoding of short-timescale spectro-temporal non-speech and speech properties in patients with left posterior temporal cortex stroke. Second, using lesion-guided probabilistic tractography in healthy participants, we revealed bidirectional temporo-cerebellar connectivity with the cerebellar dentate nuclei and Crus I/II. These findings support the view that the encoding and modeling of rapidly modulated auditory spectro-temporal properties can rely on a temporo-cerebellar interface. We discuss these findings in view of the conjecture that proactive adaptation to a dynamic environment via internal models is a generalizable principle.
Affiliation(s)
- Anika Stockert: Language and Aphasia Laboratory, Department of Neurology, Leipzig University Hospital, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Michael Schwartze: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- David Poeppel: Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Psychology, New York University, New York, United States
- Alfred Anwander: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sonja A Kotz: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
8
Mioli A, Diolaiuti F, Zangrandi A, Orsini P, Sebastiani L, Santarcangelo EL. Multisensory Integration Is Modulated by Hypnotizability. Int J Clin Exp Hypn 2021; 69:215-224. [PMID: 33560171; DOI: 10.1080/00207144.2021.1877089]
Abstract
This study investigated multisensory integration in 29 medium-to-high (mid-highs) and 24 low-to-medium (mid-lows) hypnotizable individuals, classified according to the Stanford Hypnotic Susceptibility Scale, Form A. Participants completed a simultaneity judgment (SJ) task in which an auditory and a visual stimulus were presented in close proximity to their body at 11 stimulus onset asynchronies. Results show that mid-highs were prone to judge the audiovisual stimuli as simultaneous over a wider range of time intervals between the sensory stimuli, as expressed by a broader temporal binding window, when the visual stimulus preceded the auditory one. No significant difference was observed for response times. The findings indicate a role of hypnotizability in multisensory integration, likely due to the highs' cerebellar peculiarities and/or sensory modality preference.
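A common way to quantify the temporal binding window in an SJ task like this one is to fit a Gaussian to the proportion of "simultaneous" responses across SOAs and take its width. The sketch below uses fabricated data and a full-width-at-half-maximum definition purely for illustration, not the authors' analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, amp, mu, sigma):
    # Proportion of "simultaneous" responses as a function of SOA (ms).
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical data: 11 SOAs (negative = visual leading) and observed
# proportions of simultaneity judgments at each SOA.
soas = np.linspace(-500, 500, 11)
rng = np.random.default_rng(2)
p_simult = sj_gaussian(soas, 0.9, 40.0, 180.0) + rng.normal(0, 0.03, soas.size)

(amp, mu, sigma), _ = curve_fit(sj_gaussian, soas, p_simult, p0=(1.0, 0.0, 150.0))
tbw_fwhm = 2.0 * abs(sigma) * np.sqrt(2.0 * np.log(2.0))  # width at half maximum
print(f"centre = {mu:.0f} ms, temporal binding window = {tbw_fwhm:.0f} ms")
```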
Affiliation(s)
- Alessandro Mioli: Department of Medicine and Surgery, Research Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction, Rome, Italy
- Francesca Diolaiuti: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Andrea Zangrandi: Department of Medicine and Surgery, Research Unit of Neurophysiology and Neuroengineering of Human-Technology Interaction, Rome, Italy; Clinical Neuropsychology, Cognitive Disorders and Dyslexia Unit, Department of Neuro-Motor Diseases, Azienda Unità Sanitaria Locale - IRCCS, Reggio Emilia, Italy
- Paolo Orsini: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Laura Sebastiani: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
- Enrica L Santarcangelo: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Italy
9
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874; PMCID: PMC7941256; DOI: 10.1093/texcom/tgab002]
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
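For readers unfamiliar with the method, an activation likelihood estimate (ALE) map is built by smoothing each study's reported foci with Gaussian kernels and combining the per-study maps probabilistically. A toy sketch of that core computation follows; it uses simplified fixed-width kernels on a small voxel grid and is not the GingerALE implementation:

```python
import numpy as np

def modeled_activation(shape, foci, sigma_vox):
    """Per-study modeled activation map: a Gaussian blob at each reported
    focus, combined across foci by voxelwise maximum."""
    grid = np.indices(shape).reshape(len(shape), -1).T  # voxel coordinates
    ma = np.zeros(grid.shape[0])
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma_vox ** 2)))
    return ma.reshape(shape)

# Two hypothetical studies' foci on a 20x20x20 voxel grid.
studies = [[(5, 5, 5), (10, 12, 8)], [(6, 5, 5)]]
mas = [modeled_activation((20, 20, 20), foci, sigma_vox=2.0) for foci in studies]

# ALE combines the per-study maps as a probabilistic union.
ale = 1.0 - np.prod([1.0 - ma for ma in mas], axis=0)
print(f"peak ALE value: {ale.max():.3f}")
```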
Affiliation(s)
- Matt Csonka, Nadia Mardmomen, Paula J Webster, Julie A Brefczynski-Lewis, Chris Frum, and James W Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
10
Park M, Blake R, Kim Y, Kim CY. Congruent audio-visual stimulation during adaptation modulates the subsequently experienced visual motion aftereffect. Sci Rep 2019; 9:19391. [PMID: 31852921; PMCID: PMC6920416; DOI: 10.1038/s41598-019-54894-5]
Abstract
Sensory information registered in one modality can influence perception associated with sensory information registered in another modality. The current work focuses on one particularly salient form of such multisensory interaction: audio-visual motion perception. Previous studies have shown that watching visual motion and listening to auditory motion influence each other, but results from those studies are mixed with regard to the nature of the interactions promoting that influence and where within the sequence of information processing those interactions transpire. To address these issues, we investigated (i) whether concurrent audio-visual motion stimulation during an adaptation phase impacts the strength of the visual motion aftereffect (MAE) during a subsequent test phase, and (ii) whether the magnitude of that impact depends on the congruence between the auditory and visual motion experienced during adaptation. Results show that congruent audio-visual motion directions during adaptation induced a stronger initial impression and a slower decay of the MAE than incongruent directions, a difference not attributable to differential patterns of eye movements during adaptation. The audio-visual congruency effects measured here imply that visual motion perception emerges from the integration of audio-visual motion information at a sensory neural stage of processing.
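The "stronger initial impression and slower decay" claim maps naturally onto fitting an exponential decay to MAE strength over test time and comparing the fitted amplitude and time constant between conditions. A minimal sketch with fabricated ratings; the parameter values are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def mae_decay(t, a0, tau):
    # MAE strength decaying exponentially from initial amplitude a0.
    return a0 * np.exp(-t / tau)

# Hypothetical MAE strength ratings sampled over 20 s of testing.
t = np.linspace(0.0, 20.0, 21)
rng = np.random.default_rng(5)
congruent = mae_decay(t, 1.0, 9.0) + rng.normal(0, 0.03, t.size)
incongruent = mae_decay(t, 0.8, 6.0) + rng.normal(0, 0.03, t.size)

(a_c, tau_c), _ = curve_fit(mae_decay, t, congruent, p0=(1.0, 5.0))
(a_i, tau_i), _ = curve_fit(mae_decay, t, incongruent, p0=(1.0, 5.0))
print(f"congruent: a0={a_c:.2f}, tau={tau_c:.1f} s; "
      f"incongruent: a0={a_i:.2f}, tau={tau_i:.1f} s")
```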
Affiliation(s)
- Minsun Park: Department of Psychology, Korea University, Seoul 02841, Korea
- Randolph Blake: Department of Psychology and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Yeseul Kim: Department of Psychology, Korea University, Seoul 02841, Korea
- Chai-Youn Kim: Department of Psychology, Korea University, Seoul 02841, Korea
11
Cortical processes underlying the effects of static sound timing on perceived visual speed. Neuroimage 2019; 199:194-205. [DOI: 10.1016/j.neuroimage.2019.05.062]
12
Schaffert N, Janzen TB, Mattes K, Thaut MH. A Review on the Relationship Between Sound and Movement in Sports and Rehabilitation. Front Psychol 2019; 10:244. [PMID: 30809175; PMCID: PMC6379478; DOI: 10.3389/fpsyg.2019.00244]
Abstract
The role of auditory information in perceptual-motor processes has gained increased interest in sports and psychology research in recent years. Numerous neurobiological and behavioral studies have demonstrated the close interaction between auditory and motor areas of the brain, and the importance of auditory information for movement execution, control, and learning. In applied research, artificially produced acoustic information and real-time auditory information have been implemented in sports and rehabilitation to improve motor performance in athletes, healthy individuals, and patients affected by neurological or movement disorders. However, this research is scattered across time and scientific disciplines. The aim of this paper is to provide an overview of the interaction between movement and sound and to review the current literature regarding the effects of natural movement sounds, movement sonification, and rhythmic auditory information in sports and motor rehabilitation. The focus here is threefold: first, we provide an overview of empirical studies using natural movement sounds and movement sonification in sports. Second, we review recent clinical and applied studies using rhythmic auditory information and sonification in rehabilitation, addressing in particular studies on Parkinson's disease and stroke. Third, we summarize current evidence regarding the cognitive mechanisms and neural correlates underlying the processing of auditory information during movement execution and its mental representation. The state of knowledge reviewed here provides evidence for the feasibility and effectiveness of applying auditory information to improve movement execution, control, and (re)learning in sports and motor rehabilitation. The findings also corroborate the critical role of auditory information in auditory-motor coupling during motor (re)learning and performance, suggesting that this area of clinical and applied research has a large potential that is yet to be fully explored.
Affiliation(s)
- Nina Schaffert: Department of Movement and Training Science, Institute for Human Movement Science, University of Hamburg, Hamburg, Germany
- Thenille Braun Janzen: Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
- Klaus Mattes: Department of Movement and Training Science, Institute for Human Movement Science, University of Hamburg, Hamburg, Germany
- Michael H. Thaut: Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
13
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. [PMID: 30416431; PMCID: PMC6212655; DOI: 10.3389/fncir.2018.00093]
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here, we review the key cortical regions for motion processing, focussing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate auditory and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Marcello G P Rosa: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Leo L Lui: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
14
Chaplin TA, Allitt BJ, Hagan MA, Rosa MGP, Rajan R, Lui LL. Auditory motion does not modulate spiking activity in the middle temporal and medial superior temporal visual areas. Eur J Neurosci 2018; 48:2013-2029. [PMID: 30019438; DOI: 10.1111/ejn.14071]
Abstract
The integration of multiple sensory modalities is a key aspect of brain function, allowing animals to take advantage of concurrent sources of information to make more accurate perceptual judgments. For many years, multisensory integration in the cerebral cortex was deemed to occur only in high-level "polysensory" association areas. However, more recent studies have suggested that cross-modal stimulation can also influence neural activity in areas traditionally considered to be unimodal. In particular, several human neuroimaging studies have reported that extrastriate areas involved in visual motion perception are also activated by auditory motion, and may integrate audiovisual motion cues. However, the exact nature and extent of the effects of auditory motion on the visual cortex have not been studied at the single-neuron level. We recorded the spiking activity of neurons in the middle temporal (MT) and medial superior temporal (MST) areas of anesthetized marmoset monkeys upon presentation of unimodal stimuli (moving auditory or visual patterns) as well as bimodal stimuli (concurrent audiovisual motion). Despite robust, direction-selective responses to visual motion, none of the sampled neurons responded to auditory motion stimuli. Moreover, concurrent moving auditory stimuli had no significant effect on the ability of single MT and MST neurons, or populations of simultaneously recorded neurons, to discriminate the direction of motion of visual stimuli (moving random dot patterns with varying levels of motion noise). Our findings do not support the hypothesis that audiovisual motion integration is underpinned by direct interactions between MT/MST and early stages of the auditory cortical hierarchy.
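Direction-discrimination comparisons of this kind are often made with a neurometric measure such as d' computed from spike counts on preferred- versus null-direction trials, with and without the concurrent sound. A minimal sketch with simulated Poisson spike counts; the numbers are hypothetical and this is one standard analysis, not necessarily the authors' exact pipeline:

```python
import numpy as np

def dprime(pref_counts, null_counts):
    """d' for discriminating preferred vs null motion direction
    from a neuron's trial-by-trial spike counts."""
    mu_diff = pref_counts.mean() - null_counts.mean()
    pooled_sd = np.sqrt(0.5 * (pref_counts.var(ddof=1) + null_counts.var(ddof=1)))
    return mu_diff / pooled_sd

rng = np.random.default_rng(6)
# Hypothetical spike counts on visual-only and audiovisual trials.
vis_pref, vis_null = rng.poisson(20, 100), rng.poisson(8, 100)
av_pref, av_null = rng.poisson(20, 100), rng.poisson(8, 100)

print(f"visual-only d' = {dprime(vis_pref, vis_null):.2f}")
print(f"audiovisual d' = {dprime(av_pref, av_null):.2f}")  # ~unchanged, as reported
```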
Affiliation(s)
- Tristan A Chaplin: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Victoria, Australia
- Benjamin J Allitt: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
- Maureen A Hagan: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Victoria, Australia
- Marcello G P Rosa: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Victoria, Australia
- Ramesh Rajan: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Victoria, Australia
- Leo L Lui: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; ARC Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Victoria, Australia
15
Dittrich S, Noesselt T. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments. Front Psychol 2018; 9:368. [PMID: 29618999; PMCID: PMC5871701; DOI: 10.3389/fpsyg.2018.00368]
Abstract
Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction have failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In Experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In Experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at one of two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired at the other. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
Affiliation(s)
- Sandra Dittrich: Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Tömme Noesselt: Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
16
Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection. eNeuro 2018; 5:eN-NWR-0294-17. [PMID: 29354682; PMCID: PMC5773885; DOI: 10.1523/eneuro.0294-17.2018]
Abstract
We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between the visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by partially different sets of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results therefore indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.
17
Zou Z, Chau BKH, Ting KH, Chan CCH. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task. Front Aging Neurosci 2017; 9:374. [PMID: 29184494; PMCID: PMC5694625; DOI: 10.3389/fnagi.2017.00374]
Abstract
Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on multisensory integrative processes using event-related potentials (ERPs); the validity of the study was improved by including “noise” in the contrast conditions. Older and younger participants perceived visual and/or auditory stimuli that contained spatial information, and responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were captured in each task trial, along with the accuracy and reaction time of the participants’ motor responses. Older participants showed a greater extent of behavioral improvement in the multisensory (as opposed to unisensory) condition than their younger counterparts. Older participants were also found to have a fronto-centrally distributed super-additive P2, which was not the case for the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions correlated significantly with performance on spatial discrimination. The results indicate that aging modulates the integrative process at the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial-discrimination processes, compensating for the compromised attention function caused by aging.
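Super-additivity here means the ERP to the audiovisual condition exceeds the sum of the unisensory ERPs, so the index is the AV - (A + V) difference wave averaged over the P2 window. A toy sketch of that arithmetic with synthetic waveforms; the amplitudes and window are illustrative, not the study's values:

```python
import numpy as np

t = np.arange(0, 400)  # time, ms post-stimulus, 1 kHz sampling

def p2_wave(amp, peak_ms=200.0, width_ms=30.0):
    # Synthetic P2-like component: a Gaussian bump peaking near 200 ms.
    return amp * np.exp(-((t - peak_ms) ** 2) / (2.0 * width_ms ** 2))

# Hypothetical trial-averaged ERPs (µV) at a fronto-central channel.
erp_a, erp_v, erp_av = p2_wave(2.0), p2_wave(1.5), p2_wave(4.5)

# Super-additivity: AV response minus the sum of the unisensory responses.
diff_wave = erp_av - (erp_a + erp_v)
p2_window = (t >= 150) & (t <= 250)
print(f"mean AV-(A+V) amplitude in P2 window: {diff_wave[p2_window].mean():.2f} µV")
```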
Affiliation(s)
- Zhi Zou, Bolton K H Chau, Kin-Hung Ting, and Chetwyn C H Chan: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
18
Rosemann S, Wefel IM, Elis V, Fahle M. Audio-visual interaction in visual motion detection: Synchrony versus Asynchrony. J Optom 2017; 10:242-251. [PMID: 28237358; PMCID: PMC5595265; DOI: 10.1016/j.optom.2016.12.003]
Abstract
OBJECTIVE: Detection and identification of moving targets is of paramount importance in everyday life, even though it is not widely tested in optometric practice, mostly for technical reasons. There are clear indications in the literature that vision and hearing interact in the perception of moving targets, for example in noisy surrounds and in understanding speech. The main aim of visual perception, the ability that optometry seeks to optimize, is the identification of objects, from everyday objects to letters, as well as the spatial orientation of subjects in natural surrounds. To subserve this aim, corresponding visual and acoustic features from the rich spectrum of signals supplied by natural environments have to be combined. METHODS: We investigated the influence of an auditory motion stimulus on visual motion detection, both with a concrete auditory motion (left/right movement) and with an abstract one (increase/decrease of pitch). RESULTS: Incongruent audiovisual stimuli led to significantly inferior detection compared with the visual-only condition. Additionally, detection was significantly better in abstract congruent than incongruent trials. For the concrete stimuli, the detection threshold was significantly better in asynchronous audiovisual conditions than in the unimodal visual condition. CONCLUSION: We find a clear but complex pattern of partly synergistic and partly inhibitory audio-visual interactions. Asynchrony appears to play only a positive role in audiovisual motion, whereas incongruence is mostly disruptive in simultaneous abstract configurations but not in concrete ones. As in speech perception in hearing-impaired patients, patients suffering from visual deficits should be able to benefit from acoustic information.
Affiliation(s)
- Stephanie Rosemann: Department of Human-Neurobiology, University of Bremen, Hochschulring 18, 28359 Bremen, Germany
- Inga-Maria Wefel: Department of Human-Neurobiology, University of Bremen, Hochschulring 18, 28359 Bremen, Germany
- Volkan Elis: Department of Human-Neurobiology, University of Bremen, Hochschulring 18, 28359 Bremen, Germany
- Manfred Fahle: Department of Human-Neurobiology, University of Bremen, Hochschulring 18, 28359 Bremen, Germany
19
Andric M, Davis B, Hasson U. Visual cortex signals a mismatch between regularity of auditory and visual streams. Neuroimage 2017; 157:648-659. [DOI: 10.1016/j.neuroimage.2017.05.028]
20
Hidaka S, Higuchi S, Teramoto W, Sugita Y. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study. Acta Psychol (Amst) 2017; 178:66-72. [PMID: 28600968; DOI: 10.1016/j.actpsy.2017.05.013]
Abstract
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion for static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using 7T functional magnetic resonance imaging. Specifically, we focused on the patterns of neural activity in SIVM and in visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for both SIVM and VIVM. Moreover, compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas interact closely and directly in the perception of SIVM.
21
Ronconi L, Casartelli L, Carna S, Molteni M, Arrigoni F, Borgatti R. When one is Enough: Impaired Multisensory Integration in Cerebellar Agenesis. Cereb Cortex 2017; 27:2041-2051. [PMID: 26946125; DOI: 10.1093/cercor/bhw049]
Abstract
In the last two decades, an intriguing shift in the understanding of the cerebellum has led researchers to consider the nonmotor functions of this structure. Although various aspects of perceptual and sensory processing have been linked to cerebellar activity, whether the cerebellum is essential for binding information from different sensory modalities remains uninvestigated. Multisensory integration (MSI) appears very early in ontogenesis and is critical in several perceptual, cognitive, and social domains. For the first time, we investigated MSI in a rare case of cerebellar agenesis without any other associated brain malformations. To this aim, we measured reaction times (RTs) after the presentation of visual, auditory, and audiovisual stimuli. A group of neurotypical age-matched individuals served as controls. Although we observed the typical advantage of the auditory modality over the visual modality in our patient, a clear impairment in MSI was found. Beyond the obvious prudence necessary when inferring definitive conclusions from a single case, this finding is of interest in light of the reduced MSI abilities reported in several neurodevelopmental and psychiatric disorders in which the cerebellum has been implicated, such as autism, dyslexia, and schizophrenia.
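With RTs to auditory, visual, and audiovisual stimuli, multisensory integration is commonly tested against the race-model inequality, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t): violations indicate true integration rather than mere statistical facilitation. A sketch of that test on fabricated RT samples; the abstract does not state that this was the authors' exact analysis:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid."""
    return np.searchsorted(np.sort(rts), t_grid, side="right") / len(rts)

# Hypothetical RT samples (ms) for the three stimulus conditions.
rng = np.random.default_rng(3)
rt_a = rng.normal(320, 40, 200)   # auditory
rt_v = rng.normal(360, 45, 200)   # visual
rt_av = rng.normal(290, 35, 200)  # audiovisual

t_grid = np.linspace(150, 600, 91)
race_bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
violation = ecdf(rt_av, t_grid) - race_bound

# Positive values violate the race model (evidence of integration);
# a participant with impaired MSI would be expected to show none.
print(f"maximum race-model violation: {violation.max():.3f}")
```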
Affiliation(s)
- L Ronconi: Developmental and Cognitive Neuroscience Laboratory, Department of General Psychology, University of Padova, 35122 Padova, Italy; Child Psychopathology Unit, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, 23842 Lecco, Italy
- L Casartelli: Child Psychopathology Unit, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, 23842 Lecco, Italy; Developmental Psychopathology Unit, Vita-Salute San Raffaele University, 20132 Milan, Italy
- S Carna: Child Psychopathology Unit, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, 23842 Lecco, Italy; Developmental Psychopathology Unit, Vita-Salute San Raffaele University, 20132 Milan, Italy
- M Molteni: Child Psychopathology Unit, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, 23842 Lecco, Italy
- R Borgatti: Neuropsychiatry and Neurorehabilitation Unit, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, 23842 Lecco, Italy
22
Kayser SJ, Philiastides MG, Kayser C. Sounds facilitate visual motion discrimination via the enhancement of late occipital visual representations. Neuroimage 2017; 148:31-41. [PMID: 28082107; PMCID: PMC5349847; DOI: 10.1016/j.neuroimage.2017.01.010]
Abstract
Sensory discriminations, such as judgements about visual motion, often benefit from multisensory evidence. Despite many reports of enhanced brain activity during multisensory conditions, it remains unclear which dynamic processes implement the multisensory benefit for an upcoming decision in the human brain. Specifically, it remains difficult to attribute perceptual benefits to specific processes, such as early sensory encoding, the transformation of sensory representations into a motor response, or more unspecific processes such as attention. We combined an audio-visual motion discrimination task with single-trial mapping of dynamic sensory representations in EEG activity to localize when and where multisensory congruency facilitates perceptual accuracy. Our results show that a congruent sound facilitates the encoding of motion direction in occipital sensory, as opposed to parieto-frontal, cortices, and facilitates later, rather than early (i.e., below 100 ms), sensory activations. This multisensory enhancement was visible as an earlier rise of motion-sensitive activity in middle-occipital regions about 350 ms from stimulus onset, which reflected the better discriminability of motion direction from brain activity and correlated with the perceptual benefit provided by congruent multisensory information. This supports a hierarchical model of multisensory integration in which the enhancement of relevant sensory cortical representations is transformed into a more accurate choice. In short: feature-specific multisensory integration occurs in sensory, not amodal, cortex; it occurs late, around 350 ms post stimulus onset; and acoustic and visual representations interact in occipital motion regions.
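The "discriminability of motion direction from brain activity" rests on single-trial decoding of the EEG at successive time points. A minimal cross-validated sketch of that idea with simulated epochs; the data, classifier choice, and injected effect latency are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical epoched EEG: trials x channels x time samples,
# with a binary motion-direction label for each trial.
rng = np.random.default_rng(4)
n_trials, n_chan, n_times = 200, 32, 60
X = rng.normal(size=(n_trials, n_chan, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :, 35:45] += 0.3  # inject a late "motion-sensitive" signal

# Decode direction separately at each time point (5-fold cross-validation).
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, i], y, cv=5).mean()
    for i in range(n_times)
])
print("peak decoding accuracy at time sample", accuracy.argmax())
```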
Affiliation(s)
- Stephanie J Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
23
Predicting the Multisensory Consequences of One's Own Action: BOLD Suppression in Auditory and Visual Cortices. PLoS One 2017; 12:e0169131. [PMID: 28060861 PMCID: PMC5218407 DOI: 10.1371/journal.pone.0169131] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2016] [Accepted: 12/12/2016] [Indexed: 11/19/2022] Open
Abstract
Predictive mechanisms are essential for successful interaction with the environment and for compensating for delays in the transmission of neural signals. However, whether and how we predict multisensory action outcomes remains largely unknown. Here we investigated the existence of multisensory predictive mechanisms in a context where actions have outcomes in different modalities. During fMRI data acquisition, auditory, visual and auditory-visual stimuli were presented in active and passive conditions. In the active condition, a self-initiated button press elicited the stimuli with variable short delays (0-417 ms) between action and outcome, and participants had to detect the presence of a delay for the auditory or visual outcome (task modality). In the passive condition, stimuli appeared automatically, and participants had to detect the number of stimulus modalities (unimodal/bimodal). For action consequences compared to identical but unpredictable control stimuli, we observed suppression of the blood oxygen level dependent (BOLD) response in a broad network including bilateral auditory and visual cortices. This effect was independent of task modality or stimulus modality and was strongest for trials in which no delay was detected (undetected < detected). In bimodal vs. unimodal conditions we found activation differences in the left cerebellum for detected vs. undetected trials, and increased cerebellar-sensory cortex connectivity. Thus, action-related predictive mechanisms lead to BOLD suppression in multiple sensory brain regions. These findings support the hypothesis of multisensory predictive mechanisms, which are probably implemented in the left cerebellum.
24
Maniglia M, Grassi M, Ward J. Sounds Are Perceived as Louder When Accompanied by Visual Movement. Multisens Res 2017. [DOI: 10.1163/22134808-00002569] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In this study, we present three experiments investigating the influence of visual movement on auditory judgements. In Experiments 1 and 2, two bursts of noise were presented and participants were required to judge which was louder using a forced-choice task. One of the two bursts was accompanied by a moving disc. The other burst was accompanied either by no visual stimulus (Experiment 1) or by a static disc (Experiment 2). When the two sounds were of identical intensity, participants judged the sound accompanied by the moving disc as louder. The effect was greater when the auditory stimuli were of the same intensity, but it was still present for mid-to-high intensity differences. In a third, control, experiment participants judged the pitch (and not the loudness) of a pair of tones. Here the pattern was different: visual motion had no effect for tones of the same pitch, and there was no shift of responses towards the interval accompanied by the moving disc. Instead, the effect on pitch was reversed relative to what was observed for loudness, with mid-to-high frequency tones accompanied by motion rated as lower in pitch than those in the static intervals. The natural tendency for moving objects to elicit sounds may lead to an automatic perceptual influence of vision over sound, particularly when the latter is ambiguous. This is the first account of this novel audio-visual interaction.
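A simple way to see the reported bias is to tabulate, for each intensity difference, the proportion of trials on which the burst paired with the moving disc was judged louder, and to read off the point of subjective equality. The sketch below does this for hypothetical forced-choice data (all counts and intensity steps are invented for illustration).

```python
# Hypothetical 2AFC data: for each intensity difference (moving-disc burst
# minus comparison burst, in dB), the proportion of "moving-disc burst is
# louder" responses. A proportion above 0.5 at 0 dB is the visual bias.
import numpy as np

delta_db = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
n_trials = np.array([40, 40, 40, 40, 40])
n_chose_moving = np.array([6, 14, 26, 33, 38])   # invented counts

p_moving = n_chose_moving / n_trials
for d, p in zip(delta_db, p_moving):
    print(f"delta = {d:+.1f} dB -> P(choose moving-disc burst) = {p:.2f}")

# Point of subjective equality: the intensity difference at which the two
# bursts are judged equally loud, here read off by linear interpolation.
pse = np.interp(0.5, p_moving, delta_db)
print(f"PSE ~ {pse:+.2f} dB (negative: moving-disc burst sounds louder)")
```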
Affiliation(s)
- Massimo Grassi
- Department of General Psychology, University of Padova, Italy
- Jamie Ward
- School of Psychology, University of Sussex, Falmer, Brighton, BN1 9QH, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
25
Effenberg AO, Fehse U, Schmitz G, Krueger B, Mechling H. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments. Front Neurosci 2016; 10:219. [PMID: 27303255 PMCID: PMC4883456 DOI: 10.3389/fnins.2016.00219] [Citation(s) in RCA: 52] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2016] [Accepted: 05/02/2016] [Indexed: 12/19/2022] Open
Abstract
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. However, the overwhelming part of both research lines is dedicated to basic research. Besides research in the domains of music, dance and motor rehabilitation, there is almost no evidence for the enhanced effectiveness of multisensory information in the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should become integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches that use acoustic information as error feedback in motor learning settings, we aim to generate additional movement information suitable for accelerating and enhancing adequate sensorimotor representations, processable below the level of consciousness. In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants trained with additional movement sonification showed better performance than both other groups. These results indicate that movement sonification enhances motor learning of a complex gross motor skill, even beyond the usually expected rhythmic effects of acoustic information on motor learning.
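The core idea, a direct mapping of kinematic parameters onto sound parameters, can be illustrated with a toy example. The mapping ranges, the velocity profile, and the choice of pitch as the target parameter below are assumptions for illustration, not the sonification actually used in the study.

```python
# Toy movement sonification: map a kinematic parameter (stroke velocity)
# onto pitch, sample by sample. Ranges and the linear mapping are
# illustrative assumptions.
import numpy as np

def velocity_to_pitch(v, v_min=0.0, v_max=3.0, f_min=220.0, f_max=880.0):
    """Linearly map velocity (m/s) onto a pitch in Hz."""
    v = np.clip(v, v_min, v_max)
    return f_min + (v - v_min) / (v_max - v_min) * (f_max - f_min)

# Simulated velocity profile of one rowing stroke (drive then recovery).
t = np.linspace(0.0, 2.0, 200)
velocity = 3.0 * np.exp(-((t - 0.6) / 0.3) ** 2)   # peaks mid-drive

pitch = velocity_to_pitch(velocity)
print(f"pitch range: {pitch.min():.0f}-{pitch.max():.0f} Hz")
# Rendering pitch over time as a continuous tone yields a real-time
# auditory image of the movement that can accompany visual and
# proprioceptive feedback during training.
```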
Affiliation(s)
- Alfred O Effenberg
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Ursula Fehse
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Gerd Schmitz
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Bjoern Krueger
- Computer Science, Faculty of Mathematics and Natural Sciences, Institute of Computer Science II, University of Bonn, Bonn, Germany
- Heinz Mechling
- Institute of Sport Gerontology, German Sport University Cologne, Cologne, Germany
26
Hidaka S, Teramoto W, Sugita Y. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review. Front Integr Neurosci 2015; 9:62. [PMID: 26733827 PMCID: PMC4686600 DOI: 10.3389/fnint.2015.00062] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 12/03/2015] [Indexed: 11/13/2022] Open
Abstract
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that information from different senses (vision, audition, tactile sensation, and so on) can interact perceptually in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition dominates vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning have reported that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence alone can trigger visual motion perception. Other sensory information, such as motor action or smell, exhibits similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion processing areas, for spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information mutually interacts in spatiotemporal processing in the perception of the external world and that common perceptual and neural mechanisms underlie spatiotemporal processing.
Affiliation(s)
- Souta Hidaka
- Department of Psychology, Rikkyo University, Saitama, Japan
- Wataru Teramoto
- Department of Psychology, Kumamoto University, Kumamoto, Japan
- Yoichi Sugita
- Department of Psychology, Waseda University, Tokyo, Japan
27
Grzeschik R, Lewald J, Verhey JL, Hoffmann MB, Getzmann S. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials. Eur J Neurosci 2015; 43:66-77. [PMID: 26469706 DOI: 10.1111/ejn.13102] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2015] [Revised: 09/10/2015] [Accepted: 10/08/2015] [Indexed: 11/28/2022]
Abstract
Adaptation to visual or auditory motion affects within-modality motion processing, as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, and to motion in (ii) the same or (iii) the opposite direction of the test stimulus were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed: the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific: cP1 was smaller after scatter than after motion adaptation in either direction. No effects were found for later components of motion-onset AEPs. While the VEP results provided clear evidence for a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific, effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected by the AEPs of the present study.
Affiliation(s)
- Ramona Grzeschik
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, D-39120 Magdeburg, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Auditory Cognitive Neuroscience Laboratory, Ruhr University Bochum, Bochum, Germany; Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jesko L Verhey
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, D-39120 Magdeburg, Germany; Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Michael B Hoffmann
- Center for Behavioral Brain Sciences, Magdeburg, Germany; Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Stephan Getzmann
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
28
Krebber M, Harwood J, Spitzer B, Keil J, Senkowski D. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices. Neuroimage 2015; 117:160-9. [DOI: 10.1016/j.neuroimage.2015.05.056] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2015] [Revised: 04/15/2015] [Accepted: 05/19/2015] [Indexed: 11/16/2022] Open
29
Baumann O, Borra RJ, Bower JM, Cullen KE, Habas C, Ivry RB, Leggio M, Mattingley JB, Molinari M, Moulton EA, Paulin MG, Pavlova MA, Schmahmann JD, Sokolov AA. Consensus paper: the role of the cerebellum in perceptual processes. Cerebellum 2015; 14:197-220. [PMID: 25479821 PMCID: PMC4346664 DOI: 10.1007/s12311-014-0627-7] [Citation(s) in RCA: 287] [Impact Index Per Article: 31.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Various lines of evidence accumulated over the past 30 years indicate that the cerebellum, long recognized as essential for motor control, also has considerable influence on perceptual processes. In this paper, we bring together experts from psychology and neuroscience, with the aim of providing a succinct but comprehensive overview of key findings related to the involvement of the cerebellum in sensory perception. The contributions cover such topics as anatomical and functional connectivity, evolutionary and comparative perspectives, visual and auditory processing, biological motion perception, nociception, self-motion, timing, predictive processing, and perceptual sequencing. While no single explanation has yet emerged concerning the role of the cerebellum in perceptual processes, this consensus paper summarizes the impressive empirical evidence on this problem and highlights diversities as well as commonalities between existing hypotheses. In addition to work with healthy individuals and patients with cerebellar disorders, it is also apparent that several neurological conditions in which perceptual disturbances occur, including autism and schizophrenia, are associated with cerebellar pathology. A better understanding of the involvement of the cerebellum in perceptual processes will thus likely be important for identifying and treating perceptual deficits that may at present go unnoticed and untreated. This paper provides a useful framework for further debate and empirical investigations into the influence of the cerebellum on sensory perception.
Affiliation(s)
- Oliver Baumann
- Queensland Brain Institute, The University of Queensland, St. Lucia, Queensland, Australia
30
Paraskevopoulos E, Kuchenbuch A, Herholz SC, Foroglou N, Bamidis P, Pantev C. Tones and numbers: a combined EEG-MEG study on the effects of musical expertise in magnitude comparisons of audiovisual stimuli. Hum Brain Mapp 2014; 35:5389-400. [PMID: 24916460 DOI: 10.1002/hbm.22558] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2014] [Revised: 05/09/2014] [Accepted: 05/12/2014] [Indexed: 11/08/2022] Open
Abstract
This study investigated the cortical responses underlying magnitude comparisons of multisensory stimuli and examined the effect that musical expertise has on this process. The comparative judgments were based on a newly learned rule binding the auditory and visual stimuli within the context of magnitude comparisons: "the higher the pitch of the tone, the larger the number presented." The cortical responses were measured by simultaneous MEG/EEG recordings, and a combined source analysis with individualized realistic head models was performed. Musical expertise effects were investigated by comparing musicians to nonmusicians. Congruent audiovisual stimuli, corresponding to the newly learned rule, elicited activity in frontotemporal and occipital areas. In contrast, incongruent stimuli activated temporal and parietal regions. Musicians, when compared with nonmusicians, showed increased differences between congruent and incongruent stimuli in a prefrontal region, indicating that musical expertise may affect multisensory comparative judgments within a generalized representation of analog magnitude.
Affiliation(s)
- Evangelos Paraskevopoulos
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany; Laboratory of Medical Physics, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
31
Abstract
Neurophysiological findings have suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, starting already in primary auditory cortex. Here, the effect of visual motion on the processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in both the timing and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found as early as around 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
32
Sensory and striatal areas integrate auditory and visual signals into behavioral benefits during motion discrimination. J Neurosci 2013; 33:8841-9. [PMID: 23678126 DOI: 10.1523/jneurosci.3020-12.2013] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
For effective interactions with our dynamic environment, it is critical for the brain to integrate motion information from the visual and auditory senses. Combining fMRI and psychophysics, this study investigated how the human brain integrates auditory and visual motion into benefits in motion discrimination. Subjects discriminated the motion direction of audiovisual stimuli that contained directional motion signal in the auditory, visual, audiovisual, or no modality at two levels of signal reliability. Therefore, this 2 × 2 × 2 factorial design manipulated: (1) auditory motion information (signal vs noise), (2) visual motion information (signal vs noise), and (3) reliability of motion signal (intact vs degraded). Behaviorally, subjects benefited significantly from audiovisual integration primarily for degraded auditory and visual motion signals while obtaining near ceiling performance for "unisensory" signals when these were reliable and intact. At the neural level, we show audiovisual motion integration bilaterally in the visual motion areas hMT+/V5+ and implicate the posterior superior temporal gyrus/planum temporale in auditory motion processing. Moreover, we show that the putamen integrates audiovisual signals into more accurate motion discrimination responses. Our results suggest audiovisual integration processes at both the sensory and response selection levels. In all of these regions, the operational profile of audiovisual integration followed the principle of inverse effectiveness, in which audiovisual response suppression for intact stimuli turns into response enhancements for degraded stimuli. This response profile parallels behavioral indices of audiovisual integration, in which subjects benefit significantly from audiovisual integration only for the degraded conditions.
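The principle of inverse effectiveness described here can be made concrete with a simple multisensory enhancement index. The sketch below computes it for hypothetical accuracy values; all numbers are invented for illustration and are not the study's data.

```python
# Multisensory enhancement index: (AV - max(A, V)) / max(A, V).
# Inverse effectiveness predicts larger enhancement for degraded signals.
# Accuracy values below are invented for illustration.
conditions = {
    # condition: (auditory-only, visual-only, audiovisual) accuracy
    "intact":   (0.95, 0.93, 0.96),
    "degraded": (0.62, 0.60, 0.74),
}

for name, (acc_a, acc_v, acc_av) in conditions.items():
    best_uni = max(acc_a, acc_v)
    enhancement = (acc_av - best_uni) / best_uni
    print(f"{name:>8}: multisensory enhancement = {enhancement:+.1%}")
# Expected pattern under inverse effectiveness: near-zero (or negative)
# enhancement for intact signals, clear positive enhancement for
# degraded ones, matching the behavioral profile in the abstract.
```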
33
Grzeschik R, Böckmann-Barthel M, Mühler R, Verhey JL, Hoffmann MB. Direction-specific adaptation of motion-onset auditory evoked potentials. Eur J Neurosci 2013; 38:2557-65. [PMID: 23725339 DOI: 10.1111/ejn.12264] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2012] [Revised: 04/12/2013] [Accepted: 04/26/2013] [Indexed: 11/26/2022]
Abstract
Auditory evoked potentials (AEPs) to motion onset in humans are dominated by a fronto-central complex, with a change-negative deflection 1 (cN1) and a change-positive deflection 2 (cP2) component. Here the contribution of veridical motion detectors to motion-onset AEPs was investigated with the hypothesis that direction-specific adaptation effects would indicate the contribution of such motion detectors. AEPs were recorded from 33 electroencephalographic channels to the test stimulus, i.e. motion onset of horizontal virtual auditory motion (60° per s) from straight ahead to the left. AEPs were compared in two experiments for three conditions, which differed in their history prior to the motion-onset test stimulus: (i) without motion history (Baseline), (ii) with motion history in the same direction as the test stimulus (Adaptation Same), and (iii) a reference condition with auditory history. For Experiment 1, condition (iii) comprised motion in the opposite direction (Adaptation Opposite). For Experiment 2, a noise in the absence of coherent motion (Matched Noise) was used as the reference condition. In Experiment 1, the amplitude difference cP2 - cN1 obtained for Adaptation Same was significantly smaller than for Baseline and Adaptation Opposite. In Experiment 2, it was significantly smaller than for Matched Noise. Adaptation effects were absent for cN1 and cP2 latencies. These findings demonstrate direction-specific adaptation of the motion-onset AEP. This suggests that veridical auditory motion detectors contribute to the motion-onset AEP.
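The adaptation measure used here, the amplitude difference cP2 − cN1 of the motion-onset response, is easy to state as a computation. The sketch below extracts it from a simulated ERP trace; the sampling rate, latency windows, and waveform shape are assumptions for illustration only.

```python
# Sketch: peak-to-peak amplitude (cP2 - cN1) of a motion-onset AEP.
# The ERP trace, sampling rate, and search windows are illustrative.
import numpy as np

fs = 500                                  # Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)             # 0-500 ms after motion onset
# Simulated fronto-central ERP: negative cN1 near 170 ms, positive cP2
# near 250 ms, plus noise.
erp = (-3.0 * np.exp(-((t - 0.17) / 0.02) ** 2)
       + 2.5 * np.exp(-((t - 0.25) / 0.03) ** 2)
       + np.random.default_rng(2).normal(0, 0.2, t.size))

def window_peak(signal, times, t_lo, t_hi, polarity):
    """Return the extremum of signal within [t_lo, t_hi] seconds."""
    mask = (times >= t_lo) & (times <= t_hi)
    return signal[mask].min() if polarity == "neg" else signal[mask].max()

cn1 = window_peak(erp, t, 0.13, 0.21, "neg")   # assumed cN1 window
cp2 = window_peak(erp, t, 0.21, 0.31, "pos")   # assumed cP2 window
print(f"cN1 = {cn1:.2f} uV, cP2 = {cp2:.2f} uV, cP2 - cN1 = {cp2 - cn1:.2f} uV")
# Direction-specific adaptation would appear as a smaller cP2 - cN1 after
# same-direction adaptation than after baseline or opposite-direction.
```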
Affiliation(s)
- Ramona Grzeschik
- Department of Ophthalmology, Visual Processing Laboratory, Otto von Guericke University Magdeburg, Leipziger Strasse 44, 39120, Magdeburg, Germany
34
Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech. PLoS One 2013; 8:e64489. [PMID: 23734202 PMCID: PMC3667190 DOI: 10.1371/journal.pone.0064489] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2012] [Accepted: 04/16/2013] [Indexed: 11/19/2022] Open
Abstract
Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs that were defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
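Intersubject correlation (ISC) as used here boils down to correlating each subject's voxel time course with the average time course of the remaining subjects. A minimal leave-one-out sketch with simulated data follows; the array dimensions and effect size are assumptions, not the study's.

```python
# Minimal leave-one-out intersubject correlation (ISC): correlate each
# subject's voxel time course with the mean of all other subjects.
# Data dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_voxels, n_timepoints = 13, 1000, 560
shared = rng.normal(size=(n_voxels, n_timepoints))       # stimulus-driven
data = 0.4 * shared + rng.normal(size=(n_subjects, n_voxels, n_timepoints))

def corr_rows(a, b):
    """Row-wise Pearson correlation of two (voxels x time) arrays."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(1) / np.sqrt((a**2).sum(1) * (b**2).sum(1))

isc = np.zeros(n_voxels)
for s in range(n_subjects):
    others = data[np.arange(n_subjects) != s].mean(axis=0)
    isc += corr_rows(data[s], others)
isc /= n_subjects
print(f"mean ISC = {isc.mean():.3f}")   # nonzero thanks to the shared signal
```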
35
Ogawa A, Macaluso E. Audio-visual interactions for motion perception in depth modulate activity in visual area V3A. Neuroimage 2013; 71:158-67. [PMID: 23333414 DOI: 10.1016/j.neuroimage.2013.01.012] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2012] [Revised: 12/20/2012] [Accepted: 01/09/2013] [Indexed: 11/28/2022] Open
Abstract
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) "matched vs. unmatched" conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio-visual "congruent vs. incongruent" between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio-visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio-visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio-visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices.
Affiliation(s)
- Akitoshi Ogawa
- Neuroimaging Laboratory, IRCCS, Santa Lucia Foundation, Via Ardeatina 306, Rome 00179, Italy.
36
Schmitz G, Mohammadi B, Hammer A, Heldmann M, Samii A, Münte TF, Effenberg AO. Observation of sonified movements engages a basal ganglia frontocortical network. BMC Neurosci 2013; 14:32. [PMID: 23496827 PMCID: PMC3602090 DOI: 10.1186/1471-2202-14-32] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2012] [Accepted: 03/07/2013] [Indexed: 11/25/2022] Open
Abstract
Background: Producing sounds with a musical instrument can lead to audiomotor coupling, i.e. the joint activation of the auditory and motor systems, even when only one modality is probed. The sonification of otherwise mute movements by sounds based on kinematic parameters of the movement has been shown to improve motor performance and the perception of movements.
Results: Here we demonstrate in a group of healthy young non-athletes that congruently (sounds match visual movement kinematics) vs. incongruently (no match) sonified breaststroke movements of a human avatar lead to better perceptual judgement of small differences in movement velocity. Moreover, functional magnetic resonance imaging revealed enhanced activity in superior and medial posterior temporal regions including the superior temporal sulcus (STS), known as an important multisensory integration site, as well as the insula bilaterally and the precentral gyrus on the right side. Functional connectivity analysis revealed pronounced connectivity of the STS with the basal ganglia and thalamus, as well as with frontal motor regions, for the congruent stimuli. This was not seen to the same extent for the incongruent stimuli.
Conclusions: We conclude that sonification of movements amplifies the activity of the human action observation system, including subcortical structures of the motor loop. Sonification may thus be an important method to enhance training and therapy effects in sports science and neurological rehabilitation.
Affiliation(s)
- Gerd Schmitz
- Institute of Sports Science, University of Hannover, Hannover, Germany
37
Beer AL, Plank T, Meyer G, Greenlee MW. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing. Front Integr Neurosci 2013; 7:5. [PMID: 23407860 PMCID: PMC3570774 DOI: 10.3389/fnint.2013.00005] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Accepted: 01/25/2013] [Indexed: 11/22/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) has shown that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, the msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.
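The subadditivity criterion mentioned in the ROI analysis (a bimodal response smaller than the sum of the unimodal responses) is straightforward to test on ROI estimates. Below is a sketch on hypothetical per-subject betas; all values and the sample size are invented for illustration.

```python
# Sketch: testing subadditivity (AV < A + V) on hypothetical per-subject
# ROI response estimates (e.g., GLM betas). Values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subjects = 10
beta_a = rng.normal(1.0, 0.3, n_subjects)    # auditory-only response
beta_v = rng.normal(1.2, 0.3, n_subjects)    # visual-only response
beta_av = rng.normal(1.7, 0.3, n_subjects)   # bimodal response

# Subadditivity: AV minus (A + V) should be reliably negative.
diff = beta_av - (beta_a + beta_v)
t_stat, p_two_sided = stats.ttest_1samp(diff, 0.0)
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"mean AV - (A + V) = {diff.mean():.2f}, t = {t_stat:.2f}, "
      f"one-sided p = {p_one_sided:.4f}")
```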
Affiliation(s)
- Anton L. Beer
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität Regensburg, Regensburg, Germany
- Tina Plank
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Georg Meyer
- Department of Experimental Psychology, University of Liverpool, Liverpool, UK
- Mark W. Greenlee
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität Regensburg, Regensburg, Germany
38
Multisensory Interactions during Motion Perception. Front Neurosci 2013. [DOI: 10.1201/9781439812174-37] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] Open
39
Diaconescu AO, Hasher L, McIntosh AR. Visual dominance and multisensory integration changes with age. Neuroimage 2013; 65:152-66. [PMID: 23036447 DOI: 10.1016/j.neuroimage.2012.09.057] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2012] [Revised: 09/23/2012] [Accepted: 09/24/2012] [Indexed: 10/27/2022] Open
40
Tyll S, Bonath B, Schoenfeld MA, Heinze HJ, Ohl FW, Noesselt T. Neural basis of multisensory looming signals. Neuroimage 2013; 65:13-22. [DOI: 10.1016/j.neuroimage.2012.09.056] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 09/03/2012] [Accepted: 09/20/2012] [Indexed: 10/27/2022] Open
41
Sokolov AA, Erb M, Grodd W, Pavlova MA. Structural Loop Between the Cerebellum and the Superior Temporal Sulcus: Evidence from Diffusion Tensor Imaging. Cereb Cortex 2012; 24:626-32. [DOI: 10.1093/cercor/bhs346] [Citation(s) in RCA: 82] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
42
Sang L, Qin W, Liu Y, Han W, Zhang Y, Jiang T, Yu C. Resting-state functional connectivity of the vermal and hemispheric subregions of the cerebellum with both the cerebral cortical networks and subcortical structures. Neuroimage 2012; 61:1213-25. [PMID: 22525876 DOI: 10.1016/j.neuroimage.2012.04.011] [Citation(s) in RCA: 187] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2012] [Revised: 04/04/2012] [Accepted: 04/06/2012] [Indexed: 11/28/2022] Open
Abstract
The human cerebellum is a heterogeneous structure, and the pattern of resting-state functional connectivity (rsFC) of each subregion has not yet been fully characterized. We aimed to systematically investigate the rsFC pattern of each cerebellar subregion in 228 healthy young adults. Voxel-based analysis revealed that several subregions showed similar rsFC patterns, reflecting functional integration; however, different subregions displayed distinct rsFC patterns, representing functional segregation. The same vermal and hemispheric subregions showed either different patterns or different strengths of rsFCs with the cerebrum, and different subregions of lobules VII and VIII displayed different rsFC patterns. Region-of-interest (ROI)-based analyses also confirmed these findings. Specifically, strong rsFCs were found: between lobules I-VI and vermal VIIb-IX and the visual network; between hemispheric VI, VIIb, VIIIa and the auditory network; between lobules I-VI, VIII and the sensorimotor network; between lobule IX, vermal VIIIb and the default-mode network; between lobule Crus I, hemispheric Crus II and the fronto-parietal network; between hemispheric VIIb, VIII and the task-positive network; between hemispheric VI, VIIb, VIII and the salience network; between most cerebellar subregions and the thalamus; between lobules V, VIIb and the midbrain red nucleus; between hemispheric Crus I, Crus II, vermal VIIIb, IX and the caudate nucleus; between lobules V, VI, VIIb, VIIIa and the pallidum and putamen; and between lobules I-V, hemispheric VIII, IX and the hippocampus and amygdala. These results confirm the existence of both functional integration and segregation among cerebellar subregions and largely improve our understanding of the functional organization of the human cerebellum.
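Seed-based rsFC of the kind reported here is, computationally, the correlation between a seed region's mean time course and every other voxel's time course, usually followed by a Fisher z-transform for group statistics. A compact sketch with simulated data follows; the array shapes, seed definition, and coupling strength are assumptions for illustration.

```python
# Sketch of seed-based resting-state functional connectivity: correlate a
# cerebellar seed's mean time course with all voxel time courses, then
# Fisher z-transform. Simulated data; shapes are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_timepoints, n_voxels = 240, 5000
brain = rng.normal(size=(n_timepoints, n_voxels))
seed_voxels = brain[:, :20]                 # pretend these form the seed ROI
brain[:, 1000:1050] += seed_voxels.mean(1, keepdims=True)  # coupled network

seed_ts = seed_voxels.mean(axis=1)
seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()
voxels = (brain - brain.mean(0)) / brain.std(0)
r_map = voxels.T @ seed_ts / n_timepoints   # Pearson r per voxel
z_map = np.arctanh(np.clip(r_map, -0.999, 0.999))  # Fisher z for group stats
print("max |z| outside the seed:", np.abs(z_map[20:]).max())
# The artificially coupled voxels (indices 1000-1049) stand out, mimicking
# a cerebro-cerebellar network detected from a cerebellar seed.
```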
Affiliation(s)
- Li Sang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052, China
43
Bamiou DE, Werring D, Cox K, Stevens J, Musiek FE, Brown MM, Luxon LM. Patient-reported auditory functions after stroke of the central auditory pathway. Stroke 2012; 43:1285-9. [PMID: 22382162 DOI: 10.1161/strokeaha.111.644039] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND AND PURPOSE: Auditory functional limitations experienced by patients after stroke of the central auditory pathways remain underinvestigated. The purpose of this study was to measure patient-reported hearing difficulties in everyday life in nonaphasic patients with stroke of the auditory brain versus normal control subjects, and to examine how hearing difficulties correlate with auditory tests and site of lesion in individual cases.
METHODS: We recruited 21 individuals with auditory brain stroke (excluding those with aphasia), diagnosed on the basis of a brain MRI conducted 1 to 2 weeks after the stroke and assessed in the chronic stage of stroke. Twenty-three controls matched for age and hearing were also recruited. All subjects completed the Amsterdam Inventory for Auditory Disability (consisting of subscales for sound detection, recognition, localization, speech in quiet, and speech in noise) and underwent baseline audiometry and central auditory processing tests (dichotic digits, frequency and duration patterns, gaps in noise).
RESULTS: Sound recognition and localization subscores of the inventory were significantly worse in case subjects versus control subjects, with severe and significant functional limitation (z score >3) reported by 9 of 21 case subjects. None of the inventory subscales correlated with audiometric thresholds, but the localization and recognition subscales showed a moderate to strong correlation with dichotic digits (left ear) and pattern tests.
CONCLUSIONS: A substantial proportion of patients may experience and report severe auditory functional limitations, not limited to speech sounds, after stroke of the auditory brain. A hearing questionnaire may help identify patients who require more extensive assessment to inform rehabilitation plans.
Affiliation(s)
- Doris-Eva Bamiou
- Neuro-otology Department, National Hospital for Neurology and Neurosurgery, London, UK.
44
Wuerger SM, Parkes L, Lewis PA, Crocker-Buque A, Rutschmann R, Meyer GF. Premotor Cortex Is Sensitive to Auditory–Visual Congruence for Biological Motion. J Cogn Neurosci 2012; 24:575-87. [PMID: 22126670 PMCID: PMC7614374 DOI: 10.1162/jocn_a_00173] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
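The conjunction analysis described in Experiment 1 is commonly implemented as a minimum-statistic test: a voxel counts as coactivated only if both unimodal contrasts are significant there. A toy sketch follows; the map sizes, injected effects, and threshold are assumptions for illustration, not the study's analysis settings.

```python
# Toy conjunction analysis via the minimum statistic: a voxel is counted
# as audio-visual coactivated only if both unimodal t-maps exceed the
# threshold there. Maps and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_voxels = 10000
t_auditory = rng.normal(size=n_voxels)
t_visual = rng.normal(size=n_voxels)
t_auditory[:300] += 4.0          # pretend auditory-responsive voxels
t_visual[150:450] += 4.0         # overlapping visual-responsive voxels

t_threshold = 3.1                # assumed height threshold
conjunction = np.minimum(t_auditory, t_visual) > t_threshold
print("coactivated voxels:", conjunction.sum())   # ~ the 150-300 overlap
```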
45
Li X, Ge X, Sun J, Tong S. Locating the sources for cross-modal interactions and decision making during judging the visual-affected auditory intensity change. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:3067-70. [PMID: 22254987 DOI: 10.1109/iembs.2011.6090838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Audiovisual interaction has been one of the most important topics in cognitive neuroscience. Visual stimuli can significantly affect auditory perception, and vice versa. Nevertheless, how much a change in visual stimuli influences the perception of an auditory change remains to be investigated. In this paper, we designed an audiovisual experiment in which subjects were required to judge whether there was a change in the intensities of two sounds presented with a 150 ms interval, while two size-changing visual stimuli were presented simultaneously. Behavioral results demonstrated that incongruent audiovisual change could result in an illusory perception of a change in sound intensity. For the correctly judged trials, source analysis showed two characteristic windows after the first auditory stimulus: (i) the 160-200 ms window, including the auditory P200 and visual N100 waves, which was related to audiovisual interaction and working memory of the first stimulus, with localized sources in the insula and agranular retrolimbic area; and (ii) the 300-400 ms window for the P300, with sources in the premotor cortex and caudate nucleus, related to later audiovisual interaction, change discrimination, and working memory. These preliminary results imply two stages in the audiovisual change perception task, involving the insula, agranular retrolimbic area, premotor cortex, and caudate nucleus.
Affiliation(s)
- Xuan Li
- Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200030, China
46
Sokolov AA, Erb M, Gharabaghi A, Grodd W, Tatagiba MS, Pavlova MA. Biological motion processing: the left cerebellum communicates with the right superior temporal sulcus. Neuroimage 2011; 59:2824-30. [PMID: 22019860 DOI: 10.1016/j.neuroimage.2011.08.039] [Citation(s) in RCA: 91] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2011] [Revised: 07/21/2011] [Accepted: 08/15/2011] [Indexed: 11/17/2022] Open
Abstract
The cerebellum is thought to be engaged not only in motor control, but also in the neural network dedicated to visual processing of body motion. However, the pattern of connectivity within this network, in particular, between the cortical circuitry for observation of others' actions and the cerebellum remains largely unknown. By combining functional magnetic resonance imaging (fMRI) with functional connectivity analysis and dynamic causal modelling (DCM), we assessed cerebro-cerebellar connectivity during a visual perceptual task with point-light displays depicting human locomotion. In the left lateral cerebellum, regions in the lobules Crus I and VIIB exhibited increased fMRI response to biological motion. The outcome of the connectivity analyses delivered the first evidence for reciprocal communication between the left lateral cerebellum and the right posterior superior temporal sulcus (STS). Through communication with the right posterior STS that is a key node not only for biological motion perception but also for social interaction and visual tasks on theory of mind, the left cerebellum might be involved in a wide range of social cognitive functions.
Affiliation(s)
- Arseny A Sokolov
- Department of Neurosurgery, University of Tübingen Medical School, and Developmental Cognitive and Social Neuroscience Unit, Department of Pediatric Neurology and Child Development, Children's Hospital, Tübingen, Germany.
47
Diaconescu AO, Alain C, McIntosh AR. The co-occurrence of multisensory facilitation and cross-modal conflict in the human brain. J Neurophysiol 2011; 106:2896-909. [PMID: 21880944 DOI: 10.1152/jn.00303.2011] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Perceptual objects often comprise a visual and auditory signature that arrives simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic, multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, situations of sensory dominance of one modality at the expense of another impair performance. In the present study, magnetoencephalography recordings of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. As participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. As task instructions changed to incorporate cross-modal conflict, a process whereby auditory and visual components of cross-modal stimuli were compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend in cingulate, temporal, and prefrontal cortices.
48
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/9781439812174-10] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
49
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/b11092-10] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
50
Soto-Faraco S, Väljamäe A. Multisensory Interactions during Motion Perception. Front Neurosci 2011. [DOI: 10.1201/b11092-37] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open