1
Peng B, Huang JJ, Li Z, Zhang LI, Tao HW. Cross-modal enhancement of defensive behavior via parabigemino-collicular projections. Curr Biol 2024:S0960-9822(24)00836-4. PMID: 39019036; DOI: 10.1016/j.cub.2024.06.052.
Abstract
Effective detection and avoidance of environmental threats are crucial for animals' survival. Integration of sensory cues associated with threats across different modalities can significantly enhance animals' detection and behavioral responses. However, the neural circuit-level mechanisms underlying the modulation of defensive behavior or fear responses under simultaneous multimodal sensory inputs remain poorly understood. Here, we report in mice that bimodal looming stimuli combining coherent visual and auditory signals elicit more robust defensive/fear reactions than unimodal stimuli. These reactions include intensified escape and prolonged hiding, suggesting a heightened defensive/fear state. These responses depend on the activity of the superior colliculus (SC), while its downstream nucleus, the parabigeminal nucleus (PBG), predominantly influences the duration of hiding behavior. The PBG temporally integrates visual and auditory signals and enhances the salience of threat signals by amplifying SC sensory responses through its feedback projection to the visual layer of the SC. Our results suggest an evolutionarily conserved pathway in defense circuits for multisensory integration and cross-modality enhancement.
Affiliation(s)
- Bo Peng
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089, USA
- Junxiang J Huang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Graduate Program in Biomedical and Biological Sciences, University of Southern California, Los Angeles, CA 90033, USA
- Zhong Li
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Li I Zhang
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
- Huizhong Whit Tao
- Zilkha Neurogenetic Institute, Center for Neural Circuits and Sensory Processing Disorders, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; Department of Physiology and Neuroscience, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA.
2
Bolam J, Diaz JA, Andrews M, Coats RO, Philiastides MG, Astill SL, Delis I. A drift diffusion model analysis of age-related impact on multisensory decision-making processes. Sci Rep 2024; 14:14895. PMID: 38942761; PMCID: PMC11213863; DOI: 10.1038/s41598-024-65549-5.
Abstract
Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
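The latent process that the HDDM fits can be caricatured as a noisy random walk to a decision bound. A minimal sketch, with illustrative parameter values rather than those estimated in the study:

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, ndt=0.3, rng=None):
    """One drift-diffusion trial: accumulate noisy evidence to a bound.
    Returns (choice, reaction_time); ndt is non-decision time."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t + ndt

def mean_rt(drift, n=300, seed=0):
    """Mean reaction time over n simulated trials."""
    rng = random.Random(seed)
    return sum(simulate_ddm(drift, rng=rng)[1] for _ in range(n)) / n

# A higher evidence-accumulation rate (as reported for easier audiovisual
# trials) yields shorter mean reaction times.
fast, slow = mean_rt(2.0), mean_rt(0.5)
```

Increasing the boundary parameter instead would model the increased response caution the study reports for difficult audiovisual trials.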
Affiliation(s)
- Joshua Bolam
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
- Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland.
- Jessica A Diaz
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- School of Social Sciences, Birmingham City University, West Midlands, B15 3HE, UK
- Mark Andrews
- School of Social Sciences, Nottingham Trent University, Nottinghamshire, NG1 4FQ, UK
- Rachel O Coats
- School of Psychology, University of Leeds, West Yorkshire, LS2 9JT, UK
- Marios G Philiastides
- School of Neuroscience and Psychology, University of Glasgow, Lanarkshire, G12 8QB, UK
- Sarah L Astill
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
3
Ma S, Zhou Y, Wan T, Ren Q, Yan J, Fan L, Yuan H, Chan M, Chai Y. Bioinspired In-Sensor Multimodal Fusion for Enhanced Spatial and Spatiotemporal Association. Nano Lett 2024; 24:7091-7099. PMID: 38804877; DOI: 10.1021/acs.nanolett.4c01727.
Abstract
Multimodal perception can capture more precise and comprehensive information than unimodal approaches. However, current sensory systems typically merge multimodal signals at computing terminals following parallel processing and transmission, which risks losing spatial association information and requires time stamps to maintain temporal coherence for time-series data. Here we demonstrate bioinspired in-sensor multimodal fusion, which effectively enhances comprehensive perception and reduces data transfer between sensory terminals and computation units. By adopting floating gate phototransistors with reconfigurable photoresponse plasticity, we realize agile spatial and spatiotemporal fusion under nonvolatile and volatile photoresponse modes. To realize optimal spatial estimation, we integrate spatial information from visual-tactile signals. For dynamic events, we capture and fuse spatiotemporal information from visual-audio signals in real time, realizing a dance-music synchronization recognition task without a time-stamping process. This in-sensor multimodal fusion approach has the potential to simplify multimodal integration systems, extending the in-sensor computing paradigm.
Affiliation(s)
- Sijie Ma
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Yue Zhou
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Tianqing Wan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Qinqi Ren
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Jianmin Yan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Lingwei Fan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Huanmei Yuan
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, People's Republic of China
- Mansun Chan
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, People's Republic of China
- Yang Chai
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
4
Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. PMID: 37747452; DOI: 10.1111/bjop.12683.
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Because of this direct behavioural relevance, cross-modal integration retains a certain degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model, with a perceptual prior estimated from empirical data, to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration from Scheller & Sui (2022, JEP:HPP) using a lower number of events, thereby favouring the opposite illusion through enhanced influences of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
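The "self as a prior" idea can be sketched with the standard conjugate-Gaussian result: the posterior mean is a precision-weighted average of the sensory estimate and the prior, so a prior centred on a salient (e.g., self-relevant) value pulls estimates toward it, and more so when the evidence is noisy. All numbers below are illustrative:

```python
def posterior_mean(obs, obs_var, prior_mean, prior_var):
    """Precision-weighted combination of a Gaussian likelihood and prior."""
    w = (1.0 / obs_var) / (1.0 / obs_var + 1.0 / prior_var)
    return w * obs + (1.0 - w) * prior_mean

# Same observation and prior; only the sensory noise differs.
low_noise = posterior_mean(obs=4.0, obs_var=0.5, prior_mean=2.0, prior_var=1.0)
high_noise = posterior_mean(obs=4.0, obs_var=2.0, prior_mean=2.0, prior_var=1.0)
# The noisier estimate is drawn closer to the prior mean of 2.0.
```

This is the sense in which fewer events (weaker evidence) should enhance the influence of the prior, as reported in Experiment 2.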
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Department of Psychology, Durham University, Durham, UK
- Huilin Fang
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui
- Department of Psychology, University of Aberdeen, Aberdeen, UK
5
Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024; 59:1770-1788. PMID: 38230578; DOI: 10.1111/ejn.16251.
Abstract
Studies on multisensory perception often focus on simplistic conditions in which one single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, highlighting the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
6
Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024; 131:723-737. PMID: 38416720; DOI: 10.1152/jn.00451.2023.
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participant's dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.

NEW & NOTEWORTHY: Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
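The reliability weighting at issue here follows the standard minimum-variance cue-combination formula. A minimal sketch; the variances below are arbitrary stand-ins for judgment variability under the two response methods:

```python
def fuse(mu_v, var_v, mu_p, var_p):
    """Minimum-variance fusion of visual and proprioceptive estimates.
    Returns (fused_mean, fused_variance, visual_weight)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_p)
    mu = w_v * mu_v + (1.0 - w_v) * mu_p
    var = 1.0 / (1.0 / var_v + 1.0 / var_p)
    return mu, var, w_v

mu1, var1, w1 = fuse(0.0, 1.0, 10.0, 1.0)  # equally reliable judgments
mu2, var2, w2 = fuse(0.0, 1.0, 10.0, 4.0)  # noisier proprioceptive judgments
# Inflating proprioceptive variability raises the visual weight and pulls
# the fused estimate toward the visual signal; the fused variance stays
# below either unisensory variance.
```

This is the sense in which a judgment method that inflates one modality's variability shifts the asymmetry of integration and recalibration.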
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
7
Kreyenmeier P, Bhuiyan I, Gian M, Chow HM, Spering M. Smooth pursuit inhibition reveals audiovisual enhancement of fast movement control. J Vis 2024; 24:3. PMID: 38558158; PMCID: PMC10996987; DOI: 10.1167/jov.24.4.3.
Abstract
The sudden onset of a visual object or event elicits an inhibition of eye movements at latencies approaching the minimum delay of visuomotor conductance in the brain. Typically, information presented via multiple sensory modalities, such as sound and vision, evokes stronger and more robust responses than unisensory information. Whether and how multisensory information affects ultra-short latency oculomotor inhibition is unknown. In two experiments, we investigate smooth pursuit and saccadic inhibition in response to multisensory distractors. Observers tracked a horizontally moving dot and were interrupted by an unpredictable visual, auditory, or audiovisual distractor. Distractors elicited a transient inhibition of pursuit eye velocity and catch-up saccade rate within ∼100 ms of their onset. Audiovisual distractors evoked stronger oculomotor inhibition than visual- or auditory-only distractors, indicating multisensory response enhancement. Multisensory response enhancement magnitudes were equal to the linear sum of responses to component stimuli. These results demonstrate that multisensory information affects eye movements even at ultra-short latencies, establishing a lower time boundary for multisensory-guided behavior. We conclude that oculomotor circuits must have privileged access to sensory information from multiple modalities, presumably via a fast, subcortical pathway.
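The additivity claim, that the multisensory enhancement equals the linear sum of the responses to the component stimuli, reduces to a simple check. The inhibition magnitudes below are hypothetical, not values from the study:

```python
def is_additive(av_response, a_response, v_response, tol=0.05):
    """True if the audiovisual response matches the linear sum of the
    unisensory responses to within a tolerance."""
    return abs(av_response - (a_response + v_response)) <= tol

# Hypothetical oculomotor-inhibition magnitudes (arbitrary units):
a_only, v_only, audiovisual = 0.4, 0.6, 1.02
# Within tolerance of the linear sum -> neither super- nor sub-additive.
additive = is_additive(audiovisual, a_only, v_only)
```

A value well above the sum would instead indicate super-additive enhancement, the classic signature of nonlinear multisensory integration.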
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Ishmam Bhuiyan
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mathew Gian
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Hiu Mei Chow
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
8
Ichikawa K, Kaneko K. Bayesian inference is facilitated by modular neural networks with different time scales. PLoS Comput Biol 2024; 20:e1011897. PMID: 38478575; PMCID: PMC10962854; DOI: 10.1371/journal.pcbi.1011897.
Abstract
Various animals, including humans, have been suggested to perform Bayesian inference to handle noisy, time-varying external information. For the brain to perform Bayesian inference, the prior distribution must be acquired and represented by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. Our findings reveal that networks with modular structures, composed of fast and slow modules, are adept at representing this prior distribution, enabling more accurate Bayesian inferences. Specifically, a modular network consisting of a main module connected with input and output layers, plus a sub-module with slower neural activity connected only with the main module, outperformed networks with uniform time scales. Prior information was represented specifically by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the neural network could effectively predict the time-varying inputs. Furthermore, when the time scales of neurons were trained starting from networks with uniform time scales and no modular structure, the slow-fast modular structure described above, and the division of roles in which prior knowledge is selectively represented in the slow sub-module, emerged spontaneously. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of a modular structure with a time-scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
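The role ascribed to the slow sub-module, integrating observations over a long window to represent the input mean and variance, can be caricatured as exponential averaging with a slow time constant. A sketch under arbitrary input statistics (mean 5, standard deviation 2) and an arbitrary time constant:

```python
import random

def slow_stats(samples, tau):
    """Exponentially weighted mean and variance with time constant tau:
    a stand-in for a slow module tracking input statistics (the 'prior')."""
    alpha = 1.0 / tau
    mean, var = samples[0], 0.0
    for x in samples[1:]:
        d = x - mean
        mean += alpha * d
        var = (1.0 - alpha) * (var + alpha * d * d)
    return mean, var

rng = random.Random(1)
xs = [rng.gauss(5.0, 2.0) for _ in range(5000)]
m, v = slow_stats(xs, tau=200)  # slow averaging -> stable running estimates
```

A small tau (a fast module) would track each noisy sample; the large tau smooths over the noise, which is why the slow component is the natural locus for the prior.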
Affiliation(s)
- Kohei Ichikawa
- Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo, Japan
- Kunihiko Kaneko
- Research Center for Complex Systems Biology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- The Niels Bohr Institute, University of Copenhagen, Blegdamsvej, Copenhagen, Denmark
9
Park M, Blake R, Kim CY. Audiovisual interactions outside of visual awareness during motion adaptation. Neurosci Conscious 2024; 2024:niad027. PMID: 38292024; PMCID: PMC10823907; DOI: 10.1093/nc/niad027.
Abstract
Motion aftereffects (MAEs), illusory motion experienced in a direction opposite to real motion experienced during prior adaptation, have been used to assess audiovisual interactions. In a previous study from our laboratory, we demonstrated that a congruent direction of auditory motion presented concurrently with visual motion during adaptation strengthened the consequent visual MAE, compared to when auditory motion was incongruent in direction. Those judgments of MAE strength, however, could have been influenced by expectations or response bias arising from mere knowledge of the state of audiovisual congruity during adaptation. To prevent such knowledge, we now employed continuous flash suppression to render visual motion perceptually invisible during adaptation, ensuring that observers were completely unaware of the visual adapting motion and only aware of the motion direction of the sound they were hearing. We found a small but statistically significant congruence effect of sound on the adaptation strength produced by invisible adapting motion. After considering alternative explanations for this finding, we conclude that auditory motion can impact the strength of visual adaptation produced by translational visual motion even when that motion transpires outside of awareness.
Affiliation(s)
- Minsun Park
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Randolph Blake
- Department of Psychology, Vanderbilt University, PMB 407817 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
- Chai-Youn Kim
- School of Psychology, Korea University, 145, Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
10
Matsui R, Aoyama T, Kato K, Hasegawa Y. Real-time motion force-feedback system with predictive-vision for improving motor accuracy. Sci Rep 2024; 14:2168. PMID: 38272970; PMCID: PMC10810826; DOI: 10.1038/s41598-024-52811-z.
Abstract
Many haptic guidance systems have been studied over the years; however, most have been limited to predefined guidance methods. Calculating guidance according to the operator's motion is important for efficient human motor adaptation and learning. In this study, we developed a system that haptically provides a guidance trajectory by sequentially weighting between the operator's trajectory and the ideal trajectory calculated from a predictive-vision system. We investigated, through subject experiments, whether motion completion with a predictive-vision system affects human motor accuracy and adaptation in time-constrained, goal-directed reaching and ball-hitting tasks. The experiment was conducted with 12 healthy participants, all of whom performed ball-hitting tasks. Half of the participants received force guidance from the proposed system in the middle of the experiment. We found that use of the proposed system improved the operator's motor performance. Furthermore, we observed a trend in which the improvement in motor performance while using the system correlated with the improvement retained after its washout. These results suggest that the predictive-vision system effectively enhances motor accuracy, reducing target error, in dynamic and time-constrained reaching and hitting tasks and may contribute to facilitating motor learning.
Affiliation(s)
- Ryo Matsui
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
- Tadayoshi Aoyama
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
- Kenji Kato
- Assistive Robot Center, National Center for Geriatrics and Gerontology, Obu, Aichi, 474-8511, Japan
- Yasuhisa Hasegawa
- Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan
11
Nikbakht N. More Than the Sum of Its Parts: Visual-Tactile Integration in the Behaving Rat. Adv Exp Med Biol 2024; 1437:37-58. PMID: 38270852; DOI: 10.1007/978-981-99-7611-9_3.
Abstract
We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter we examine the case of a visual-tactile orientation categorization experiment in rats and explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black-and-white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a cortical circuit involved in modality-invariant processing of objects.
Affiliation(s)
- Nader Nikbakht
- Massachusetts Institute of Technology, Cambridge, MA, USA.
12
Zaidel A. Multisensory Calibration: A Variety of Slow and Fast Brain Processes Throughout the Lifespan. Adv Exp Med Biol 2024; 1437:139-152. PMID: 38270858; DOI: 10.1007/978-981-99-7611-9_9.
Abstract
From before we are born, throughout development, adulthood, and aging, we are immersed in a multisensory world. At each of these stages, our sensory cues are constantly changing, due to body, brain, and environmental changes. While integration of information from our different sensory cues improves precision, this only improves accuracy if the underlying cues are unbiased. Thus, multisensory calibration is a vital and ongoing process. To meet this grand challenge, our brains have evolved a variety of mechanisms. First, in response to a systematic discrepancy between sensory cues (without external feedback) the cues calibrate one another (unsupervised calibration). Second, multisensory function is calibrated to external feedback (supervised calibration). These two mechanisms superimpose. While the former likely reflects a lower level mechanism, the latter likely reflects a higher level cognitive mechanism. Indeed, neural correlates of supervised multisensory calibration in monkeys were found in higher level multisensory cortical area VIP, but not in the relatively lower level multisensory area MSTd. In addition, even without a cue discrepancy (e.g., when experiencing stimuli from different sensory cues in series) the brain monitors supra-modal statistics of events in the environment and adapts perception cross-modally. This too comprises a variety of mechanisms, including confirmation bias to prior choices, and lower level cross-sensory adaptation. Further research into the neuronal underpinnings of the broad and diverse functions of multisensory calibration, with improved synthesis of theories is needed to attain a more comprehensive understanding of multisensory brain function.
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel.
13
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:23-35. [PMID: 38270851 DOI: 10.1007/978-981-99-7611-9_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This is a relatively new concept, in that previous research has largely been conducted in two parallel disciplines: much effort has been devoted either to sensory integration across modalities using activity summed over a duration of time, or to decision-making with a single sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes of mammals. In this review, we summarize and comment on these studies, which combine the two long-separate fields of multisensory integration and decision-making, and show how the new findings advance our understanding of the neural mechanisms mediating multisensory information processing in a more complete way.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China.
14
Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. [PMID: 37996525 PMCID: PMC11003438 DOI: 10.1038/s41593-023-01458-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Accepted: 09/08/2023] [Indexed: 11/25/2023]
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA.
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
15
Tanaka R, Zhou B, Agrochao M, Badwan BA, Au B, Matos NCB, Clark DA. Neural mechanisms to incorporate visual counterevidence in self-movement estimation. Curr Biol 2023; 33:4960-4979.e7. [PMID: 37918398 PMCID: PMC10848174 DOI: 10.1016/j.cub.2023.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Revised: 10/07/2023] [Accepted: 10/09/2023] [Indexed: 11/04/2023]
Abstract
In selecting appropriate behaviors, animals should weigh sensory evidence both for and against specific beliefs about the world. For instance, animals measure optic flow to estimate and control their own rotation. However, existing models of flow detection can be spuriously triggered by visual motion created by objects moving in the world. Here, we show that stationary patterns on the retina, which constitute evidence against observer rotation, suppress inappropriate stabilizing rotational behavior in the fruit fly Drosophila. In silico experiments show that artificial neural networks (ANNs) that are optimized to distinguish observer movement from external object motion similarly detect stationarity and incorporate negative evidence. Employing neural measurements and genetic manipulations, we identified components of the circuitry for stationary pattern detection, which runs parallel to the fly's local motion and optic-flow detectors. Our results show how the fly brain incorporates negative evidence to improve heading stability, exemplifying how a compact brain exploits geometrical constraints of the visual world.
Affiliation(s)
- Ryosuke Tanaka
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Baohua Zhou
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Statistics and Data Science, Yale University, New Haven, CT 06511, USA
- Margarida Agrochao
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA
- Bara A Badwan
- School of Engineering and Applied Science, Yale University, New Haven, CT 06511, USA
- Braedyn Au
- Department of Physics, Yale University, New Haven, CT 06511, USA
- Natalia C B Matos
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Damon A Clark
- Department of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511, USA; Department of Physics, Yale University, New Haven, CT 06511, USA; Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA; Quantitative Biology Institute, Yale University, New Haven, CT 06511, USA.
16
Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. [PMID: 37545303 PMCID: PMC10404926 DOI: 10.1098/rstb.2022.0334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Accepted: 05/15/2023] [Indexed: 08/08/2023] Open
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour, and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using visuo-vestibular heading as a model system. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations, and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory, and sensory-motor association areas. This poses a challenge for the brain in integrating cues, across time and across sensory modality, with signals such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in posterior and frontal/prefrontal regions, helps revise conventional views of how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
17
Zaidel A, Salomon R. Multisensory decisions from self to world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220335. [PMID: 37545311 PMCID: PMC10404927 DOI: 10.1098/rstb.2022.0335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Accepted: 06/19/2023] [Indexed: 08/08/2023] Open
Abstract
Classic Bayesian models of perceptual inference describe how an ideal observer would integrate 'unisensory' measurements (multisensory integration) and attribute sensory signals to their origin(s) (causal inference). However, in the brain, sensory signals are always received in the context of a multisensory bodily state-namely, in combination with other senses. Moreover, sensory signals from both interoceptive sensing of one's own body and exteroceptive sensing of the world are highly interdependent and never occur in isolation. Thus, the observer must fundamentally determine whether each sensory observation is from an external (versus internal, self-generated) source to even be considered for integration. Critically, solving this primary causal inference problem requires knowledge of multisensory and sensorimotor dependencies. Thus, multisensory processing is needed to separate sensory signals. These multisensory processes enable us to simultaneously form a sense of self and form distinct perceptual decisions about the external world. In this opinion paper, we review and discuss the similarities and distinctions between multisensory decisions underlying the sense of self and those directed at acquiring information about the world. We call attention to the fact that heterogeneous multisensory processes take place all along the neural hierarchy (even in forming 'unisensory' observations) and argue that more integration of these aspects, in theory and experiment, is required to obtain a more comprehensive understanding of multisensory brain function. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Roy Salomon
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel
- Department of Cognitive Sciences, University of Haifa, Mount Carmel, Haifa 3498838, Israel
18
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. [PMID: 37545312 PMCID: PMC10404924 DOI: 10.1098/rstb.2022.0347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/27/2023] [Indexed: 08/08/2023] Open
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information integrated across multiple sources and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions in which the numbers of auditory and visual events are the same and conditions in which they differ. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference, and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, AL 36688, USA
- Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
19
Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023; 18:e0290461. [PMID: 37607201 PMCID: PMC10443876 DOI: 10.1371/journal.pone.0290461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 08/08/2023] [Indexed: 08/24/2023] Open
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this account: that both processes should also scale similarly with the history of multisensory discrepancies, i.e., the sequence of discrepancies over several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, it also scaled with the discrepancies presented in at least the three preceding audio-visual trials. The ventriloquism bias, however, depended neither on this three-trial history of multisensory discrepancies nor on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
20
Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419 PMCID: PMC10957398 DOI: 10.1016/j.neuron.2023.05.008] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 12/02/2022] [Accepted: 05/10/2023] [Indexed: 06/12/2023]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK.
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
21
Kreyenmeier P, Schroeger A, Cañal-Bruland R, Raab M, Spering M. Rapid Audiovisual Integration Guides Predictive Actions. eNeuro 2023; 10:ENEURO.0134-23.2023. [PMID: 37591732 PMCID: PMC10464656 DOI: 10.1523/eneuro.0134-23.2023] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 07/19/2023] [Accepted: 07/22/2023] [Indexed: 08/19/2023] Open
Abstract
Natural movements, such as catching a ball or capturing prey, typically involve multiple senses. Yet, laboratory studies on human movements commonly focus solely on vision and ignore sound. Here, we ask how visual and auditory signals are integrated to guide interceptive movements. Human observers tracked the brief launch of a simulated baseball, randomly paired with batting sounds of varying intensities, and made a quick pointing movement at the ball. Movement end points revealed systematic overestimation of target speed when the ball launch was paired with a loud versus a quiet sound, although sound was never informative. This effect was modulated by the availability of visual information; sounds biased interception when the visual presentation duration of the ball was short. Amplitude of the first catch-up saccade, occurring ∼125 ms after target launch, revealed early integration of audiovisual information for trajectory estimation. This sound-induced bias was reversed during later predictive saccades when more visual information was available. Our findings suggest that auditory and visual signals are integrated to guide interception and that this integration process must occur early at a neural site that receives auditory and visual signals within an ultrashort time span.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Anna Schroeger
- Department of Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Markus Raab
- Department of Performance Psychology, German Sport University Cologne, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London SE1 0AA, United Kingdom
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
22
Tani K, Iio S, Kamiya M, Yoshizawa K, Shigematsu T, Fujishima I, Tanaka S. Neuroanatomy of reduced distortion of body-centred spatial coding during body tilt in stroke patients. Sci Rep 2023; 13:11853. [PMID: 37481585 PMCID: PMC10363170 DOI: 10.1038/s41598-023-38751-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 07/14/2023] [Indexed: 07/24/2023] Open
Abstract
Awareness of the direction of the body's (longitudinal) axis is fundamental for action and perception. The perceived orientation of the body axis is strongly biased during body tilt; however, the neural substrates underlying this phenomenon remain largely unknown. Here, we tackled this issue using a neuropsychological approach in patients with hemispheric stroke. Thirty-seven stroke patients and 20 age-matched healthy controls aligned a visual line with the perceived body longitudinal axis while the body was upright or laterally tilted by 10°. The bias of the perceived body axis caused by body tilt, termed the tilt-dependent error (TDE), was compared between the groups. The TDE was significantly smaller (i.e., performance was less affected by body tilt) in the stroke group (15.9 ± 15.9°) than in the control group (25.7 ± 17.1°). Lesion subtraction analysis and Bayesian lesion-symptom inference revealed that the abnormally reduced TDEs were associated with lesions in the right occipitotemporal cortex, such as the superior and middle temporal gyri. Our findings contribute to a better understanding of the neuroanatomy of body-centred spatial coding during whole-body tilt.
Affiliation(s)
- Keisuke Tani
- Laboratory of Psychology, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, 431-3192, Japan.
- Faculty of Psychology, Otemon Gakuin University, 2-1-15 Nishi-Ai, Ibaraki, Osaka, 567-8502, Japan.
- Shintaro Iio
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Masato Kamiya
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Kohei Yoshizawa
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Takashi Shigematsu
- Department of Rehabilitation Medicine, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Ichiro Fujishima
- Department of Rehabilitation Medicine, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Satoshi Tanaka
- Laboratory of Psychology, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, 431-3192, Japan
23
Park J, Kim S, Kim HR, Lee J. Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex. SCIENCE ADVANCES 2023; 9:eadg4156. [PMID: 37418521 PMCID: PMC10328413 DOI: 10.1126/sciadv.adg4156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Accepted: 06/07/2023] [Indexed: 07/09/2023]
Abstract
Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with a prior expectation of the visual target's motion direction. When the sensory evidence is weak, prior expectations differentially reduce MT neural responses depending on the neurons' preferred directions. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes.
Affiliation(s)
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- HyungGoo R. Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
24
Townsend B, Legere JK, von Mohrenschildt M, Shedden JM. Stimulus Onset Asynchrony Affects Weighting-related Event-related Spectral Power in Self-motion Perception. J Cogn Neurosci 2023; 35:1092-1107. [PMID: 37043240 DOI: 10.1162/jocn_a_01994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/13/2023]
Abstract
Self-motion perception relies primarily on the integration of the visual, vestibular, proprioceptive, and somatosensory systems. There is a gap in understanding how a temporal lag between visual and vestibular motion cues affects visual-vestibular weighting during self-motion perception. The beta band is an index of visual-vestibular weighting, in that robust beta event-related synchronization (ERS) is associated with a visual weighting bias, and robust beta event-related desynchronization (ERD) is associated with a vestibular weighting bias. The present study examined modulation of event-related spectral power during a heading judgment task in which participants attended to either visual (optic flow) or physical (inertial cues stimulating the vestibular, proprioceptive, and somatosensory systems) motion cues from a motion simulator mounted on a MOOG Stewart Platform. The temporal lag between the onset of visual and physical motion cues was manipulated to produce three lag conditions: simultaneous onset, visual before physical motion onset, and physical before visual motion onset. There were two main findings. First, we demonstrated that when the attended motion cue was presented before an ignored cue, the power of beta associated with the attended modality was greater than when visual-vestibular cues were presented simultaneously or when the ignored cue was presented first. This was the case for beta ERS when the visual-motion cue was attended and for beta ERD when the physical-motion cue was attended. Second, we tested whether the power of feature-binding gamma ERS (demonstrated in audiovisual and visual-tactile integration studies) increased when the visual-vestibular cues were presented simultaneously versus with temporal asynchrony. We did not observe an increase in gamma ERS when cues were presented simultaneously, suggesting that electrophysiological markers of visual-vestibular binding differ from markers of audiovisual and visual-tactile integration. All event-related spectral power measures reported in this study were generated from dipoles projecting from the left and right motor areas, based on the results of Measure Projection Analysis.
25
Bertaccini R, Ippolito G, Tarasi L, Zazio A, Stango A, Bortoletto M, Romei V. Rhythmic TMS as a Feasible Tool to Uncover the Oscillatory Signatures of Audiovisual Integration. Biomedicines 2023; 11:1746. [PMID: 37371840 DOI: 10.3390/biomedicines11061746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/09/2023] [Accepted: 06/15/2023] [Indexed: 06/29/2023] Open
Abstract
Multisensory integration is quintessential to adaptive behavior, and clinical populations show significant impairments in this domain, most notably hallucinatory reports. Interestingly, altered cross-modal interactions have also been reported in healthy individuals engaged in tasks such as the Sound-Induced Flash Illusion (SIFI). The temporal dynamics of the SIFI have recently been tied to the speed of occipital alpha rhythms (IAF), with faster oscillations entailing reduced temporal windows within which the illusion is experienced. However, entrainment-based protocols have not yet implemented rhythmic transcranial magnetic stimulation (rhTMS) to causally test this relationship, and it remained to be evaluated whether rhTMS-induced acoustic and somatosensory sensations would themselves interfere with the illusion. Here, we addressed this issue by asking 27 volunteers to perform a SIFI paradigm under Sham and active rhTMS protocols delivered over the occipital pole at the IAF. Although TMS has been proven to modulate cortical excitability, results show that the SIFI occurred under both Sham and active rhTMS, with the illusory rate not differing significantly between baseline and stimulation conditions. This aligns with the discrete sampling hypothesis, according to which alpha amplitude modulation, known to reflect changes in cortical excitability, should not account for changes in the illusory rate. Moreover, these findings highlight the viability of rhTMS-based interventions as a means to probe the neuroelectric signatures of illusory and hallucinatory audiovisual experiences in healthy and neuropsychiatric populations.
Affiliation(s)
- Riccardo Bertaccini
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Giuseppe Ippolito
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Luca Tarasi
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Agnese Zazio
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Antonietta Stango
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Marta Bortoletto
- Neurophysiology Lab., IRCCS Istituto Centro San Giovanni di Dio Fatebenefratelli, 25125 Brescia, Italy
- Vincenzo Romei
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum-Università di Bologna, 47521 Cesena, Italy
- Facultad de Lenguas y Educación, Universidad Antonio de Nebrija, 28015 Madrid, Spain
26
Zou W, Li C, Huang H. Ensemble perspective for understanding temporal credit assignment. Phys Rev E 2023; 107:024307. [PMID: 36932505] [DOI: 10.1103/physreve.107.024307] [Citation(s) in RCA: 0]
Abstract
Recurrent neural networks are widely used for modeling spatiotemporal sequences in both natural language processing and neural population dynamics. However, temporal credit assignment in these networks is hard to understand. Here, we propose that each individual connection in the recurrent computation be modeled by a spike-and-slab distribution, rather than by a precise weight value. We then derive a mean-field algorithm to train the network at the ensemble level. The method is applied to classifying handwritten digits when pixels are read in sequence, and to a multisensory integration task, a fundamental cognitive function of animals. Our model reveals important connections that determine the overall performance of the network. It also shows how spatiotemporal information is processed through the hyperparameters of the distribution, and reveals distinct types of emergent neural selectivity. To provide a mechanistic analysis of the ensemble learning, we first derive an analytic solution of the learning in the limit of an infinitely large network. We then carry out a low-dimensional projection of both neural and synaptic dynamics, analyze symmetry breaking in the parameter space, and finally demonstrate the role of stochastic plasticity in the recurrent computation. Our study thus sheds light on how weight uncertainty impacts temporal credit assignment in recurrent neural networks from the ensemble perspective.
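The spike-and-slab parameterization described in this abstract can be made concrete with a minimal sketch: each connection is a mixture of a point mass at zero (the spike) and a Gaussian (the slab), and an "ensemble" of networks corresponds to repeated draws from these distributions. This is an illustrative toy under assumed parameter values, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_slab(pi, mu, sigma, size):
    """Sample weights from a spike-and-slab distribution: with probability
    pi the weight is drawn from the 'slab' N(mu, sigma^2); otherwise it is
    exactly zero (the 'spike')."""
    slab = rng.normal(mu, sigma, size)   # Gaussian slab component
    mask = rng.random(size) < pi         # Bernoulli spike/slab switch
    return np.where(mask, slab, 0.0)

# An ensemble view of one recurrent connection: many sampled realizations.
w = sample_spike_slab(pi=0.3, mu=0.5, sigma=0.1, size=100000)
print(w.mean())          # close to pi * mu = 0.15
print((w != 0.0).mean()) # close to pi = 0.3
```

Training at the ensemble level then adjusts the hyperparameters (pi, mu, sigma) of every connection instead of a single point weight.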
Affiliation(s)
- Wenxuan Zou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Chan Li
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
27
Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417] [PMCID: PMC9745880] [DOI: 10.1098/rstb.2021.0450] [Citation(s) in RCA: 0]
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
28
Domenici N, Sanguineti V, Morerio P, Campus C, Del Bue A, Gori M, Murino V. Computational modeling of human multisensory spatial representation by a neural architecture. PLoS One 2023; 18:e0280987. [PMID: 36888612] [PMCID: PMC9994749] [DOI: 10.1371/journal.pone.0280987] [Citation(s) in RCA: 0]
Abstract
Our brain constantly combines sensory information into unitary percepts to build coherent representations of the environment. Even though this process may appear effortless, integrating inputs from different sensory modalities must overcome several computational issues, such as recoding and statistical inference problems. Following these assumptions, we developed a neural architecture replicating humans' ability to use audiovisual spatial representations, and we used the well-known ventriloquist illusion as a benchmark to evaluate its phenomenological plausibility. Our model closely replicated human perceptual behavior, providing a faithful approximation of the brain's ability to develop audiovisual spatial representations. Given its ability to model audiovisual performance in a spatial localization task, we release our model together with the dataset we recorded for its validation. We believe it will be a powerful tool for modeling and better understanding multisensory integration processes in experimental and rehabilitation environments.
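As a point of reference for the ventriloquist benchmark, spatial audiovisual percepts are classically modeled by reliability-weighted (maximum-likelihood) cue fusion, in which each cue is weighted by its inverse variance. The sketch below shows that standard textbook computation, not the authors' neural architecture; all values are hypothetical.

```python
import numpy as np

def fuse(x_v, sigma_v, x_a, sigma_a):
    """Reliability-weighted fusion of a visual and an auditory location
    estimate; each cue is weighted by its inverse variance (reliability)."""
    w_v = 1.0 / sigma_v**2
    w_a = 1.0 / sigma_a**2
    x_hat = (w_v * x_v + w_a * x_a) / (w_v + w_a)  # fused location estimate
    sigma_hat = np.sqrt(1.0 / (w_v + w_a))         # fused uncertainty
    return x_hat, sigma_hat

# Vision is typically the more spatially reliable cue, so the fused percept
# is "captured" by the visual source: the ventriloquist effect.
x_hat, sigma_hat = fuse(x_v=0.0, sigma_v=1.0, x_a=10.0, sigma_a=3.0)
print(x_hat)      # pulled strongly toward the visual location (1.0)
print(sigma_hat)  # more precise than either cue alone
```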
Affiliation(s)
- Nicola Domenici
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Genova, Genoa, Italy
- Valentina Sanguineti
- University of Genova, Genoa, Italy
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- Pietro Morerio
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Alessio Del Bue
- Visual Geometry and Modelling, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Uvip, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Vittorio Murino
- Pavis, Pattern Analysis & Computer Vision, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Verona, Verona, Italy
- Huawei Technologies Ltd., Ireland Research Center, Dublin, Ireland
29
Tseng CH, Chow HM, Spillmann L, Oxner M, Sakurai K. Body Pitch Together With Translational Body Motion Biases the Subjective Haptic Vertical. Multisens Res 2022; 36:1-29. [PMID: 36731530] [DOI: 10.1163/22134808-bja10086] [Citation(s) in RCA: 0]
Abstract
Accurate perception of verticality is critical for postural maintenance and successful physical interaction with the world. Although previous research has examined the independent influences of body orientation and self-motion under well-controlled laboratory conditions, these factors are constantly changing and interacting in the real world. In this study, we examine the subjective haptic vertical in a real-world scenario. Here, we report a bias of verticality perception in a field experiment on the Hong Kong Peak Tram as participants traveled on a slope ranging from 6° to 26°. Mean subjective haptic vertical (SHV) increased with slope by as much as 15°, regardless of whether the eyes were open (Experiment 1) or closed (Experiment 2). Shifting the body pitch by a fixed degree in an effort to compensate for the mountain slope failed to reduce the verticality bias (Experiment 3). These manipulations separately rule out visual and vestibular inputs about absolute body pitch as contributors to our observed bias. Observations collected on a tram traveling on level ground (Experiment 4A) or in a static dental chair with a range of inclinations similar to those encountered on the mountain tram (Experiment 4B) showed no significant deviation of the subjective vertical from gravity. We conclude that the SHV error is due to a combination of large, dynamic body pitch and translational motion. These observations made in a real-world scenario represent an incentive to neuroscientists and aviation experts alike for studying perceived verticality under field conditions and raising awareness of dangerous misperceptions of verticality when body pitch and translational self-motion come together.
Affiliation(s)
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, 980-8577, Japan
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, E3B 5G3, Canada
- Lothar Spillmann
- Neurology Clinic, University of Freiburg, 79106 Freiburg, Germany
- Matt Oxner
- Wilhelm Wundt Institute for Psychology, University of Leipzig, 04109 Leipzig, Germany
- Kenzo Sakurai
- Department of Human Science, Tohoku Gakuin University, Sendai, 981-3193, Japan
30
Bill J, Gershman SJ, Drugowitsch J. Visual motion perception as online hierarchical inference. Nat Commun 2022; 13:7403. [PMID: 36456546] [PMCID: PMC9715570] [DOI: 10.1038/s41467-022-34805-5] [Citation(s) in RCA: 5]
Abstract
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for future psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates targeted experiments to reveal the neural representations of latent structure.
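The flavor of online Expectation-Maximization can be conveyed on a deliberately simple problem: a two-component 1D Gaussian mixture whose parameters are nudged after every incoming sample, so inference stays online rather than batch. This is a toy sketch under strong simplifying assumptions (two components, known variance), not the authors' motion-structure model.

```python
import numpy as np

rng = np.random.default_rng(1)

def online_em(stream, mu, pi, sigma=1.0, lr=0.01):
    """Toy online EM for a two-component 1D Gaussian mixture: each sample
    triggers an E-step (component responsibilities) and a small stochastic
    M-step (a parameter nudge toward the sufficient statistics)."""
    for x in stream:
        # E-step: posterior responsibility of each component for x
        lik = pi * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        r = lik / lik.sum()
        # M-step: move parameters a small step using this one sample
        mu += lr * r * (x - mu)
        pi += lr * (r - pi)
        pi /= pi.sum()
    return mu, pi

# Stream drawn from an equal mixture of N(-2, 1) and N(+2, 1)
z = rng.random(20000) < 0.5
xs = np.where(z, -2.0, 2.0) + rng.normal(0.0, 1.0, 20000)
mu, pi = online_em(xs, mu=np.array([-1.0, 1.0]), pi=np.array([0.5, 0.5]))
print(np.sort(mu))  # the means drift close to the true values [-2, 2]
```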
Affiliation(s)
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA
- Samuel J Gershman
- Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
31
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435] [PMCID: PMC10151026] [DOI: 10.1016/j.visres.2022.108131] [Citation(s) in RCA: 0]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems Computational Biology, and Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY
- Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
32
Combining Conformist and Payoff Bias in Cultural Evolution: An Integrated Model for Human Decision-Making. Hum Nat 2022; 33:463-484. [PMID: 36515860] [DOI: 10.1007/s12110-022-09435-x] [Citation(s) in RCA: 1]
Abstract
Most research on transmission biases in cultural evolution has treated different biases as distinct strategies. Here I present a model that combines both frequency dependent bias (including conformist bias) and payoff bias in a single decision-making calculus and show that such an integrated learning strategy may be superior to relying on either bias alone. Natural selection may operate on humans' relative dependence on frequency and payoff information, but both are likely to contribute to the spread of variants with high payoffs. Importantly, the magnitude of conformist bias affects the evolutionary dynamics, and I show that an intermediate level of conformity may be most adaptive and may spontaneously evolve as it resists the invasion of low-payoff variants yet enables the fixation of high-payoff variants in the population.
33
Giret N, Rolland M, Del Negro C. Multisensory processes in birds: from single neurons to the influence of social interactions and sensory loss. Neurosci Biobehav Rev 2022; 143:104942. [DOI: 10.1016/j.neubiorev.2022.104942] [Citation(s) in RCA: 0]
34
Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022; 14:816512. [PMID: 36092809] [PMCID: PMC9452741] [DOI: 10.3389/fnagi.2022.816512] [Citation(s) in RCA: 0]
Abstract
Self-motion perception (e.g., when walking/driving) relies on the integration of multiple sensory cues, including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OA), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs vs. younger adults (YA). We also investigated the extent to which training might transfer to improved standing balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight-ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants were asked to make bimodal heading judgments and were provided with feedback (“correct”/“incorrect”) on 900 training trials. Post-training, participants’ biases and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision, i.e., reduced just-noticeable differences (JNDs), in heading perception after training. Pre- vs. post-training difference scores showed that improvements in JNDs were found only in the visual-only condition. Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but subsequently obtained post-training thresholds in the visual-only condition that were similar to those of the other participants. While OAs seemed to show optimal integration both pre- and post-training (i.e., did not show significant differences between predicted and observed JNDs), YAs showed optimal integration only post-training. There were no significant effects of training on bimodal or vestibular-only heading estimates, nor on standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
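The comparison between "predicted and observed JNDs" in this abstract refers to the standard maximum-likelihood integration prediction, in which the bimodal threshold follows from the two unimodal thresholds (JNDs being proportional to the standard deviations of the cue estimates). A minimal sketch, with hypothetical threshold values for illustration only:

```python
import numpy as np

def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    """Maximum-likelihood-integration prediction for the bimodal JND,
    equivalent to 1 / sqrt(1/jnd_v^2 + 1/jnd_vest^2)."""
    return np.sqrt((jnd_visual**2 * jnd_vestibular**2) /
                   (jnd_visual**2 + jnd_vestibular**2))

# Hypothetical heading thresholds in degrees
jnd_bimodal = predicted_bimodal_jnd(4.0, 3.0)
print(round(jnd_bimodal, 2))  # 2.4 -- better than either cue alone
```

Observed bimodal JNDs that match this prediction are taken as evidence of statistically optimal integration.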
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- *Correspondence: Jennifer L. Campos
35
Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory Integration of Anticipated Cardiac Signals with Visual Targets Affects Their Detection among Multiple Visual Stimuli. Neuroimage 2022; 262:119549. [DOI: 10.1016/j.neuroimage.2022.119549] [Citation(s) in RCA: 0]
36
Jia J, Wang T, Chen S, Ding N, Fang F. Ensemble size perception: Its neural signature and the role of global interaction over individual items. Neuropsychologia 2022; 173:108290. [PMID: 35697088] [DOI: 10.1016/j.neuropsychologia.2022.108290] [Citation(s) in RCA: 3]
Abstract
To efficiently process complex visual scenes, the visual system often summarizes statistical information across individual items and represents them as an ensemble. However, due to the lack of techniques to disentangle the representation of the ensemble from that of the individual items constituting the ensemble, whether there exists a specialized neural mechanism for ensemble processing and how ensemble perception is computed in the brain remain unknown. To address these issues, we used a frequency-tagging EEG approach to track brain responses to periodically updated ensemble sizes. Neural responses tracking the ensemble size were detected in parieto-occipital electrodes, revealing a global and specialized neural mechanism of ensemble size perception. We then used the temporal response function to isolate neural responses to the individual sizes and their interactions. Notably, while the individual sizes and their local and global interactions were encoded in the EEG signals, only the global interaction contributed directly to the ensemble size perception. Finally, distributed attention to the global stimulus pattern enhanced the neural signature of the ensemble size, mainly by modulating the neural representation of the global interaction between all individual sizes. These findings advocate a specialized, global neural mechanism of ensemble size perception and suggest that global interaction between individual items contributes to ensemble perception.
Affiliation(s)
- Jianrong Jia
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, 311121, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Tongyu Wang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, 311121, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Siqi Chen
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, 311121, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, 311121, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, 311121, China
- Fang Fang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, 100871, China.
37
Pesnot Lerousseau J, Parise CV, Ernst MO, van Wassenhove V. Multisensory correlation computations in the human brain identified by a time-resolved encoding model. Nat Commun 2022; 13:2489. [PMID: 35513362] [PMCID: PMC9072402] [DOI: 10.1038/s41467-022-29687-6] [Citation(s) in RCA: 1]
Abstract
Neural mechanisms that arbitrate between integrating and segregating multisensory information are essential for complex scene analysis and for resolving the multisensory correspondence problem. However, these mechanisms and their dynamics remain largely unknown, partly because classical models of multisensory integration are static. Here, we used the Multisensory Correlation Detector, a model that provides good explanatory power for human behavior while incorporating dynamic computations. Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one modality was leading the other (temporal order), while being recorded with magnetoencephalography. First, we confirm that the Multisensory Correlation Detector explains causal-inference and temporal-order behavioral judgments well. Second, we found strong fits of brain activity to the two outputs of the Multisensory Correlation Detector in temporo-parietal cortices. Finally, we report an asymmetry in the goodness of the fits, which were more reliable during the causal inference task than during the temporal order judgment task. Overall, our results suggest the existence of multisensory correlation detectors in the human brain, which explains why and how causal inference is strongly driven by the temporal correlation of multisensory signals.
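The core computation of a correlation detector, smoothing each unimodal signal in time and then multiplying and integrating them so that covarying streams produce a stronger "common source" signal, can be sketched as follows. This is a strongly simplified illustration, not the published Multisensory Correlation Detector implementation; the filter, time constants, and stimuli are all hypothetical.

```python
import numpy as np

def lowpass(x, tau, dt=0.001):
    """First-order exponential low-pass filter (simple temporal smoothing)."""
    y = np.zeros_like(x)
    a = dt / tau
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def correlation_unit(vis, aud, tau=0.1):
    """Smooth each unimodal signal, multiply, and integrate: a high output
    means the two streams covary and likely share a common cause."""
    return float(np.mean(lowpass(vis, tau) * lowpass(aud, tau)))

t = np.arange(0.0, 1.0, 0.001)
flash = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)  # 3 Hz visual pulses
beep_sync = flash.copy()                               # correlated audio
beep_shift = np.roll(flash, 170)                       # ~anti-phase audio
print(correlation_unit(flash, beep_sync))   # larger: integrate
print(correlation_unit(flash, beep_shift))  # smaller: segregate
```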
Affiliation(s)
- Jacques Pesnot Lerousseau
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France; Applied Cognitive Psychology, Ulm University, Ulm, Germany; Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
- Marc O Ernst
- Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Virginie van Wassenhove
- Cognitive Neuroimaging Unit, CEA DRF/Joliot, INSERM, CNRS, Université Paris-Saclay, NeuroSpin, 91191, Gif/Yvette, France
38
Tarnutzer AA, Duarte da Costa V, Baumann D, Hemm S. Heading Direction Is Significantly Biased by Preceding Whole-Body Roll-Orientation While Lying. Front Neurol 2022; 13:868144. [PMID: 35509993] [PMCID: PMC9058079] [DOI: 10.3389/fneur.2022.868144] [Citation(s) in RCA: 0]
Abstract
Background: After a prolonged static whole-body roll-tilt, a significant bias of the internal estimate of the direction of gravity has been observed when assessing the subjective visual vertical.
Objective: We hypothesized that this post-tilt bias represents a more general phenomenon, broadly affecting spatial orientation and navigation. Specifically, we predicted that after prolonged roll-tilt to either side, perceived straight-ahead would also be biased.
Methods: Twenty-five healthy participants were asked to rest in three different lying positions (supine, right-ear-down, and left-ear-down) for 5 min (“adaptation period”) prior to walking straight-ahead blindfolded for 2 min. Walking was recorded with inertial measurement unit sensors attached to different body locations and with sensor shoe insoles. The raw data were segmented with a gait-event detection method. Heading direction was determined, and linear mixed-effects models were used for statistical analyses.
Results: A significant bias in heading in the direction of the preceding roll-tilt position was observed in the post-adaptation trials. This bias was identified with both measurement systems and decreased again over the 2-min walking period.
Conclusions: The observed bias further confirms the influence of prior knowledge on spatial orientation and navigation. Specifically, it underlines the broad impact of a shifted internal estimate of the direction of gravity across a range of distinct paradigms, with similar decay time constants. In a broader context, the bias in perceived straight-ahead emphasizes that getting up in the morning after a good night's sleep is a vulnerable period, with an increased risk of falls and fall-related injuries due to the non-availability of optimally tuned internal estimates of the direction of gravity and of straight-ahead.
Affiliation(s)
- Alexander Andrea Tarnutzer
- Department of Neurology, Cantonal Hospital of Baden, Baden, Switzerland
- Faculty of Medicine, University of Zurich, Zurich, Switzerland
- *Correspondence: Alexander Andrea Tarnutzer
- Vasco Duarte da Costa
- School of Life Sciences, Institute for Medical Engineering and Medical Informatics, University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland
- Denise Baumann
- School of Life Sciences, Institute for Medical Engineering and Medical Informatics, University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland
- Simone Hemm
- School of Life Sciences, Institute for Medical Engineering and Medical Informatics, University of Applied Sciences and Arts Northwestern Switzerland, Muttenz, Switzerland
39
Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction. J Neurosci 2022; 42:2344-2355. [PMID: 35091504] [PMCID: PMC8936614] [DOI: 10.1523/jneurosci.0861-21.2022] [Citation(s) in RCA: 6]
Abstract
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. 
Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.

SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
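The drift diffusion framework invoked in this abstract can be illustrated with a minimal simulation. This is a sketch under invented parameter values (the drift rates, boundary, and non-decision time below are illustrative assumptions, not the values fitted to the EEG-informed data): a higher drift rate in the multisensory condition yields the faster, more accurate decisions the study reports.

```python
import numpy as np

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, t_nd=0.3, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates at rate
    `drift` with Gaussian noise until it hits +/- `boundary`; `t_nd` is
    non-decision time (encoding + motor). Returns (choice, reaction time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t + t_nd

rng = np.random.default_rng(0)
# Illustrative assumption: multisensory (VH) sampling yields a higher drift
# rate than either unisensory condition, producing faster and more accurate
# decisions, mirroring the behavioural pattern reported in the paper.
for label, drift in [("V", 0.8), ("H", 0.8), ("VH", 1.6)]:
    trials = [simulate_ddm(drift, boundary=1.0, rng=rng) for _ in range(2000)]
    acc = np.mean([c == 1 for c, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"{label}: accuracy={acc:.2f}, mean RT={rt:.2f}s")
```

In the paper the drift rate is informed by the neural representation of active sensing on each trial; here a fixed per-condition drift stands in for that quantity.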
Collapse
|
40
|
Sou KL, Say A, Xu H. Unity Assumption in Audiovisual Emotion Perception. Front Neurosci 2022; 16:782318. [PMID: 35310087 PMCID: PMC8931414 DOI: 10.3389/fnins.2022.782318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/09/2022] [Indexed: 11/29/2022] Open
Abstract
We experience various sensory stimuli every day, and the brain integrates them into coherent percepts. How does this integration occur, and what mechanisms underlie it? The “unity assumption” proposes that a perceiver’s belief that individual unisensory signals belong to a single event modulates the degree of multisensory integration. However, this has yet to be verified or quantified in the context of semantic emotion integration. In the present study, we investigated subjects’ judgments of the intensities and degrees of similarity of faces and voices expressing two emotions (angry and happy). We found that more similar stimulus intensities were associated with a stronger likelihood of the face and voice being integrated. More interestingly, multisensory integration in emotion perception followed a Gaussian function of the emotion intensity difference between the face and voice, with the optimal cut-off at about a 2.50-point difference on a 7-point Likert scale. This provides a quantitative estimation of the multisensory integration function in audio-visual semantic emotion perception with regard to stimulus intensity. Moreover, to investigate the variation of multisensory integration across the population, we examined the effects of participants’ personality and autistic traits. Here, we found no correlation of autistic traits with unisensory processing in a nonclinical population. Our findings shed light on the current understanding of multisensory integration mechanisms.
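The Gaussian integration window described in the abstract can be sketched numerically. The width below is a hypothetical choice, derived only from the reported ~2.5-point half-height cut-off; the paper's fitted function may differ in form and scale.

```python
import math

def p_integrate(delta, cutoff=2.5):
    """Gaussian integration window over the face-voice intensity
    difference `delta` (7-point Likert units). `cutoff` is where the
    likelihood of integration falls to 50% (~2.5 points in the study);
    the Gaussian width is back-derived from that half-height point."""
    sigma = cutoff / math.sqrt(2 * math.log(2))  # half-height -> sigma
    return math.exp(-delta ** 2 / (2 * sigma ** 2))

# Integration is near-certain for matched intensities and falls off
# smoothly as the face-voice intensity difference grows.
for d in [0.0, 1.0, 2.5, 5.0]:
    print(f"|delta|={d:.1f}: P(integrate)={p_integrate(d):.2f}")
```

By construction, `p_integrate(2.5)` is exactly 0.5, matching the reported cut-off.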
Collapse
Affiliation(s)
- Ka Lon Sou
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Humanities, Arts and Social Sciences, Singapore University of Technology and Design, Singapore, Singapore
- Ashley Say
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- Hong Xu
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore
- *Correspondence: Hong Xu
Collapse
|
41
|
Ichikawa K, Kataoka A. Dynamical Mechanism of Sampling-Based Probabilistic Inference under Probabilistic Population Codes. Neural Comput 2022; 34:804-827. [PMID: 35026031 DOI: 10.1162/neco_a_01477] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 11/04/2021] [Indexed: 11/04/2022]
Abstract
Animals make efficient probabilistic inferences based on uncertain and noisy information from the outside environment. It is known that probabilistic population codes, which have been proposed as a neural basis for encoding probability distributions, allow general neural networks (NNs) to perform near-optimal point estimation. However, the mechanism of sampling-based probabilistic inference has not been clarified. In this study, we trained two types of artificial NNs, feedforward NN (FFNN) and recurrent NN (RNN), to perform sampling-based probabilistic inference. Then we analyzed and compared their mechanisms of sampling. We found that sampling in RNN was performed by a mechanism that efficiently uses the properties of dynamical systems, unlike FFNN. In addition, we found that sampling in RNNs acted as an inductive bias, enabling a more accurate estimation than in maximum a posteriori estimation. These results provide important arguments for discussing the relationship between dynamical systems and information processing in NNs.
Collapse
Affiliation(s)
- Kohei Ichikawa
- Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-0041, Japan, and ACES, Bunkyo-ku, Tokyo-to 223-0034, Japan
- Asaki Kataoka
- Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-0041, Japan, and ACES, Bunkyo-ku, Tokyo-to 223-0034, Japan
Collapse
|
42
|
Rapid cross-sensory adaptation of self-motion perception. Cortex 2022; 148:14-30. [DOI: 10.1016/j.cortex.2021.11.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 10/24/2021] [Accepted: 11/16/2021] [Indexed: 11/19/2022]
|
43
|
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232 PMCID: PMC9127393 DOI: 10.1016/j.neuroimage.2021.118841] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 12/07/2021] [Accepted: 12/19/2021] [Indexed: 12/02/2022] Open
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e. choice accuracy and reaction time (RT) across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences in perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent ‘early’ sensory processing benefits, or ‘late’ post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an ‘Early’ component (∼100–110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a ‘Late’ component (∼330–340 ms post-stimulus onset) underlying IAT performance. 
To characterise the functional role of these components in decision formation, we incorporated a neurally-informed Hierarchical Drift Diffusion Model (HDDM), which revealed that the Late component decreased response caution, requiring less sensory evidence to be accumulated, whereas the Early component increased the duration of sensory-encoding processes for incongruent trials. Overall, our results provide mechanistic insight into the contributions of ‘early’ sensory processing and ‘late’ post-sensory neural representations of associative congruency to perceptual decision formation.
Collapse
|
44
|
Kasuga S, Crevecoeur F, Cross KP, Balalaie P, Scott SH. Integration of proprioceptive and visual feedback during online control of reaching. J Neurophysiol 2021; 127:354-372. [PMID: 34907796 PMCID: PMC8794063 DOI: 10.1152/jn.00639.2020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Visual and proprioceptive feedback both contribute to perceptual decisions, but it remains unknown how these feedback signals are integrated, or how integration accounts for factors such as delays and variance, during online control. We investigated this question by having participants reach to a target with randomly applied mechanical and/or visual disturbances. We observed that the presence of visual feedback during a mechanical disturbance did not increase the size of the muscle response significantly but did decrease variance, consistent with a dynamic Bayesian integration model. In a control experiment, we verified that vision had a potent influence when mechanical and visual disturbances were both present but opposite in sign. These results highlight a complex process for multisensory integration, where visual feedback has a relatively modest influence when the limb is mechanically disturbed, but a substantial influence when visual feedback becomes misaligned with the limb. NEW & NOTEWORTHY Visual feedback is more accurate, but proprioceptive feedback is faster. How should you integrate these sources of feedback to guide limb movement? As predicted by dynamic Bayesian models, the size of the muscle response to a mechanical disturbance was essentially the same whether visual feedback was present or not. Only under artificial conditions, such as when shifting the position of a cursor representing hand position, can one observe a muscle response from visual feedback.
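The static core of the Bayesian integration scheme the abstract appeals to is reliability-weighted averaging: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. This is a textbook sketch with made-up numbers, not the dynamic (delay-aware) model used in the paper.

```python
def integrate_cues(x_vis, var_vis, x_prop, var_prop):
    """Minimum-variance (Bayesian) combination of a visual and a
    proprioceptive position estimate: each cue is weighted by its
    inverse variance (reliability)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_prop
    var_hat = 1 / (1 / var_vis + 1 / var_prop)
    return x_hat, var_hat

# Illustrative numbers: vision is less variable than proprioception, so
# the fused estimate sits closer to the visual cue, and its variance is
# lower than either cue alone -- the variance reduction seen when visual
# feedback accompanied a mechanical disturbance.
x_hat, var_hat = integrate_cues(x_vis=1.0, var_vis=0.5, x_prop=2.0, var_prop=1.0)
print(f"fused estimate={x_hat:.3f}, fused variance={var_hat:.3f}")
```

The full model in the paper additionally handles feedback delays within an online (state-estimation) setting; the weighting principle above is the part the reported variance reduction tests.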
Collapse
Affiliation(s)
- Shoko Kasuga
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Frédéric Crevecoeur
- Institute of Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Kevin Patrick Cross
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Parsa Balalaie
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada; Department of Medicine, Queen's University, Kingston, Ontario, Canada
Collapse
|
45
|
Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. [PMID: 34780469 PMCID: PMC8629398 DOI: 10.1371/journal.pcbi.1008877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 11/29/2021] [Accepted: 10/26/2021] [Indexed: 11/23/2022] Open
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants’ unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model also replicates contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
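The causal-inference computation this model class is built on can be sketched in a few lines: given noisy auditory and visual measurements, compute the posterior probability that they share a common source (Körding et al.-style, with a zero-mean Gaussian spatial prior). The parameter values in the demo are illustrative assumptions, not the paper's fits.

```python
import math

def posterior_common(x_a, x_v, var_a, var_v, var_prior, p_common=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v)
    measurements arose from a single source, under Gaussian measurement
    noise and a zero-mean Gaussian spatial prior of variance var_prior."""
    # Likelihood under one common source (shared location integrated out).
    var_sum = var_a * var_v + var_a * var_prior + var_v * var_prior
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * var_prior
                               + x_a ** 2 * var_v + x_v ** 2 * var_a) / var_sum) \
              / (2 * math.pi * math.sqrt(var_sum))
    # Likelihood under two independent sources (product of marginals).
    like_c2 = (math.exp(-0.5 * x_a ** 2 / (var_a + var_prior))
               / math.sqrt(2 * math.pi * (var_a + var_prior))) \
            * (math.exp(-0.5 * x_v ** 2 / (var_v + var_prior))
               / math.sqrt(2 * math.pi * (var_v + var_prior)))
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

# Illustrative parameters: audition noisier than vision, broad spatial
# prior. A small audiovisual discrepancy supports a common source; a
# large one favors independent sources, weakening recalibration.
for disparity in [1.0, 5.0, 10.0]:
    p = posterior_common(disparity, 0.0, var_a=4.0, var_v=1.0, var_prior=100.0)
    print(f"disparity={disparity:4.1f}: P(common source)={p:.3f}")
```

In a recalibration model of this family, the update applied to each sense then scales with the discrepancy between its measurement and the (causal-inference-weighted) perceptual estimate.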
Collapse
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
Collapse
|
46
|
VanGilder P, Phataraphruk K, Buneo CA. Multi-session Analysis of Movement Variability While Reaching in a Virtual Environment. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6651-6654. [PMID: 34892633 DOI: 10.1109/embc46164.2021.9630728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The acquisition of neurophysiological data during awake, behaving animal experiments typically involves experimental sessions lasting several days to weeks. Therefore, it is important to understand natural fluctuations in behavioral performance over such periods. Here we quantified patterns of movement variability for reaches performed by two monkeys across five daily experimental sessions. The monkeys were trained to move in an immersive virtual reality (VR) environment that was designed to resemble the experimental room. Visual feedback of the limb was provided using VR avatar arms that were controlled through a reflective marker-based motion capture system. Additionally, tactile cues were provided in the form of physical reach targets. Spatial variability was characterized at early (peak acceleration) and late (movement endpoint) kinematic landmarks. We found that the magnitude of variability was generally larger at peak acceleration than at the endpoint but was relatively consistent across days and within animals. The spatial characteristics of variability were also generally highly consistent at peak acceleration both within and between animals but were noticeably less so at the endpoint. The results highlight the benefits of using early kinematic landmarks such as peak acceleration for quantifying movement variability in reaching studies involving animals.
Collapse
|
47
|
Skerritt-Davis B, Elhilali M. Neural Encoding of Auditory Statistics. J Neurosci 2021; 41:6726-6739. [PMID: 34193552 PMCID: PMC8336711 DOI: 10.1523/jneurosci.1887-20.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 05/19/2021] [Accepted: 05/26/2021] [Indexed: 11/21/2022] Open
Abstract
The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. 
Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences.

SIGNIFICANCE STATEMENT The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
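The local-surprisal signal central to this account can be illustrated with a stripped-down sequential Bayesian estimator along a single acoustic feature. This is a one-feature Gaussian sketch with invented numbers, far simpler than the multi-feature predictive inference model in the paper; its point is only that a change in the generative statistics produces a surprisal spike that then decays as the new statistic is learned.

```python
import math

def surprisal_trace(seq, prior_var=4.0, obs_var=1.0):
    """Surprisal (-log predictive probability) of each tone under a
    running Gaussian estimate of the sequence mean, updated with a
    Kalman-style posterior update after every observation."""
    mu, var = 0.0, prior_var          # prior over the underlying mean
    out = []
    for x in seq:
        pred_var = var + obs_var      # predictive variance for the next tone
        out.append(0.5 * (math.log(2 * math.pi * pred_var)
                          + (x - mu) ** 2 / pred_var))
        k = var / pred_var            # posterior update (Kalman gain)
        mu, var = mu + k * (x - mu), var * obs_var / pred_var
    return out

# Tone feature values jump from ~0 to ~5 halfway through: surprisal
# spikes at the change point, then decays as the new regime is learned.
seq = [0.1, -0.2, 0.0, 0.1, 5.2, 5.0, 4.9, 5.1]
for i, s in enumerate(surprisal_trace(seq)):
    print(f"t={i}: surprisal={s:.2f}")
```

In the study, such local surprisal is computed independently per feature (spectral, spatial) and modulates neural responses linearly, while the global object-level signal combines statistical beliefs across features nonlinearly.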
Collapse
|
48
|
Hu DZ, Wen K, Chen LH, Yu C. Perceptual learning evidence for supramodal representation of stimulus orientation at a conceptual level. Vision Res 2021; 187:120-128. [PMID: 34252727 DOI: 10.1016/j.visres.2021.06.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 06/09/2021] [Accepted: 06/16/2021] [Indexed: 11/28/2022]
Abstract
When stimulus inputs from different senses are integrated to form a coherent percept, inputs from a more precise sense are typically more dominant than those from a less precise sense. Here we hypothesized that some basic stimulus features, such as orientation, can be represented supramodally at a conceptual level that is independent of the original modality precision. This hypothesis was tested with perceptual learning experiments. Specifically, participants practiced coarser tactile orientation discrimination, which initially had little impact on finer visual orientation discrimination (tactile vs. visual orientation thresholds = 3:1). However, if participants also practiced a functionally orthogonal visual contrast discrimination task in a double training design, their visual orientation performance was improved at both tactile-trained and untrained orientations, as much as through direct visual orientation training. The complete tactile-to-visual learning transfer is consistent with a conceptual supramodal representation of orientation unconstrained by original modality precision, likely through certain forms of input standardization. Moreover, this conceptual supramodal representation, when improved through perceptual learning in one sense, can in turn facilitate orientation discrimination in an untrained sense.
Collapse
Affiliation(s)
- Ding-Zhi Hu
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Kai Wen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Li-Han Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Cong Yu
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China; School of Psychological and Cognitive Sciences, Peking University, Beijing, China; IDG-McGovern Institute for Brain Research, Peking University, Beijing, China
Collapse
|
49
|
Abstract
There are two competing views on how humans make decisions under uncertainty. Bayesian decision theory posits that humans optimize their behavior by establishing and integrating internal models of past sensory experiences (priors) and decision outcomes (cost functions). An alternative hypothesis posits that decisions are optimized through trial and error without explicit internal models for priors and cost functions. To distinguish between these possibilities, we introduce a paradigm that probes the sensitivity of humans to transitions between prior-cost pairs that demand the same optimal policy (metamers) but distinct internal models. We demonstrate the utility of our approach in two experiments that were classically explained by Bayesian theory. Our approach validates the Bayesian learning strategy in an interval timing task but not in a visuomotor rotation task. More generally, our work provides a domain-general approach for testing the circumstances under which humans explicitly implement model-based Bayesian computations.
Collapse
|
50
|
Abstract
Even for a stereotyped task, sensorimotor behavior is generally variable due to noise, redundancy, adaptability, learning or plasticity. The sources and significance of different kinds of behavioral variability have attracted considerable attention in recent years. However, the idea that part of this variability depends on unique individual strategies has been explored to a lesser extent. In particular, the notion of style recurs infrequently in the literature on sensorimotor behavior. In general use, style refers to a distinctive manner or custom of conducting oneself or of doing something, especially one that is typical of a person, group of people, place, context, or period. The application of the term to the domain of perceptual and motor phenomenology opens new perspectives on the nature of behavioral variability, perspectives that are complementary to those typically considered in the studies of sensorimotor variability. In particular, the concept of style may help toward the development of personalised physiology and medicine by providing markers of individual behaviour and response to different stimuli or treatments. Here, we cover some potential applications of the concept of perceptual-motor style to different areas of neuroscience, both in the healthy and the diseased. We prefer to be as general as possible in the types of applications we consider, even at the expense of running the risk of encompassing loosely related studies, given the relative novelty of the introduction of the term perceptual-motor style in neuroscience.
Collapse
Affiliation(s)
- Pierre-Paul Vidal
- CNRS, SSA, ENS Paris Saclay, Université de Paris, Centre Borelli, 75005 Paris, France
- Institute of Information and Control, Hangzhou Dianzi University, Hangzhou, China
- Francesco Lacquaniti
- Department of Systems Medicine, Center of Space Biomedicine, University of Rome Tor Vergata, 00133 Rome, Italy
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation IRCCS, 00179 Rome, Italy
Collapse
|