1. Carandini M. Sensory choices as logistic classification. Neuron 2024; 112:2854-2868.e1. PMID: 39013468; PMCID: PMC11377159; DOI: 10.1016/j.neuron.2024.06.016.
Abstract
Logistic classification is a simple way to make choices based on a set of factors: give each factor a weight, sum the results, and use the sum to set the log odds of a random draw. This operation is known to describe human and animal choices based on value (economic decisions). There is increasing evidence that it also describes choices based on sensory inputs (perceptual decisions), presented across sensory modalities (multisensory integration) and combined with non-sensory factors such as prior probability, expected value, overall motivation, and recent actions. Logistic classification can also capture the effects of brain manipulations such as local inactivations. The brain may implement it by thresholding stochastic inputs (as in signal detection theory) acquired over time (as in the drift diffusion model). It is the optimal strategy under certain conditions, and the brain appears to use it as a heuristic in a wider set of conditions.
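The recipe in this abstract (weight each factor, sum, and use the sum as the log odds of a random draw) is compact enough to sketch directly. In this minimal Python illustration the factor names, weight values, and the left/right labels are hypothetical, not taken from the paper:

```python
import math
import random

def log_odds(factors, weights):
    """Weighted sum of decision factors; the sum is the log odds of one choice."""
    return sum(w * x for w, x in zip(weights, factors))

def p_choice(factors, weights):
    """The logistic (sigmoid) function maps log odds to a choice probability."""
    return 1.0 / (1.0 + math.exp(-log_odds(factors, weights)))

def classify(factors, weights, rng=None):
    """A single stochastic choice: a random draw at the computed probability."""
    rng = rng or random.Random()
    return "right" if rng.random() < p_choice(factors, weights) else "left"

# Hypothetical factors: sensory evidence, prior probability, recent-action bias
weights = [2.0, 0.8, 0.5]
factors = [1.2, 0.0, -0.3]
p = p_choice(factors, weights)  # probability of choosing "right"
```

Because the factors enter only through a weighted sum, each factor shifts the log odds additively, which is what lets the same framework absorb sensory and non-sensory terms alike.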
Affiliation(s)
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London WC1 6BT, UK.
2. Jeong J, Nam SM, Seo H. Impact of sensory modality and tempo in motor timing. Front Psychol 2024; 15:1419135. PMID: 39184937; PMCID: PMC11341454; DOI: 10.3389/fpsyg.2024.1419135.
Abstract
Background: Accurate motor timing requires the coordinated control of actions in response to external stimuli. Over the past few years, several studies have investigated the effect of sensory input on motor timing; however, the evidence remains conflicting. The purpose of this study was to examine the impact of sensory modality and tempo on the accuracy of timed movements and to explore strategies for enhancing motor timing.
Methods: Participants (n = 30) performed synchronization and adaptation circle-drawing tasks in virtual reality. In Experiment 1, participants synchronized circle drawing with repeated stimuli across sensory modalities (auditory, visual, tactile, audio-visual, audio-tactile, and visual-tactile) and tempos (20, 30, and 60 bpm). In Experiment 2, we examined timing adaptation in circle-drawing tasks under unexpected tempo changes, whether increases or decreases.
Results: A significant interaction between modality and tempo was observed for timing accuracy. Tactile stimuli yielded significantly higher timing accuracy at 60 bpm, whereas auditory stimuli showed peak accuracy at 30 bpm. Timing error was significantly larger when adapting to the tempo-down condition than to the tempo-up condition.
Discussion: Experiment 1 showed that sensory modality affects motor timing differently depending on tempo, with the tactile modality being effective at a faster tempo and the auditory modality beneficial at a moderate tempo. Experiment 2 showed that correcting timing errors is more challenging when tempo decreases than when it increases. Our findings suggest that motor timing is intricately influenced by sensory modality and tempo variation; a comprehensive understanding of these factors is therefore needed to enhance motor timing.
Affiliation(s)
- Jaeuk Jeong
- Department of Physical Education, Seoul National University, Seoul, Republic of Korea
- Soo Mi Nam
- Division of Sports Science, Hanyang University, Ansan, Republic of Korea
- Hyejin Seo
- Department of Physical Education, Seoul National University, Seoul, Republic of Korea
3. Bolam J, Diaz JA, Andrews M, Coats RO, Philiastides MG, Astill SL, Delis I. A drift diffusion model analysis of age-related impact on multisensory decision-making processes. Sci Rep 2024; 14:14895. PMID: 38942761; PMCID: PMC11213863; DOI: 10.1038/s41598-024-65549-5.
Abstract
Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
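The hierarchical drift diffusion model fitted here builds on a simple accumulation-to-bound process, which can be sketched directly. The drift, bound, and noise values below are hypothetical, and treating the multisensory (AV) benefit as a pure drift-rate increase is a simplification of the paper's more nuanced findings:

```python
import random

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, max_t=10.0, rng=None):
    """Accumulate noisy evidence until it crosses +bound (correct) or -bound
    (error). Returns (choice, reaction_time); choice is +1, -1, or 0 on timeout."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # standard diffusion scaling of per-step noise
    while t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
        if x >= bound:
            return 1, t
        if x <= -bound:
            return -1, t
    return 0, t

# Hypothetical comparison: a higher drift rate (standing in for an AV condition)
# yields more +bound crossings and shorter decision times on average.
rng = random.Random(1)
trials_v = [simulate_ddm(1.0, rng=rng) for _ in range(200)]
trials_av = [simulate_ddm(2.5, rng=rng) for _ in range(200)]
acc_v = sum(c == 1 for c, _ in trials_v) / 200
acc_av = sum(c == 1 for c, _ in trials_av) / 200
```

Response caution, the other latent variable the authors examine, corresponds to the `bound` parameter: a wider bound trades speed for accuracy without changing the quality of the evidence itself.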
Affiliation(s)
- Joshua Bolam
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
- Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland.
- Jessica A Diaz
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- School of Social Sciences, Birmingham City University, West Midlands, B15 3HE, UK
- Mark Andrews
- School of Social Sciences, Nottingham Trent University, Nottinghamshire, NG1 4FQ, UK
- Rachel O Coats
- School of Psychology, University of Leeds, West Yorkshire, LS2 9JT, UK
- Marios G Philiastides
- School of Neuroscience and Psychology, University of Glasgow, Lanarkshire, G12 8QB, UK
- Sarah L Astill
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
4. Carandini M. Sensory choices as logistic classification. bioRxiv [Preprint] 2024:2024.01.17.576029. PMID: 38979189; PMCID: PMC11230223; DOI: 10.1101/2024.01.17.576029.
Abstract
Logistic classification is a simple way to make choices based on a set of factors: give each factor a weight, sum the results, and use the sum to set the log odds of a random draw. This operation is known to describe human and animal choices based on value (economic decisions). There is increasing evidence that it also describes choices based on sensory inputs (perceptual decisions), presented across sensory modalities (multisensory integration) and combined with non-sensory factors such as prior probability, expected value, overall motivation, and recent actions. Logistic classification can also capture the effects of brain manipulations such as local inactivations. The brain may implement it by thresholding stochastic inputs (as in signal detection theory) acquired over time (as in the drift diffusion model). It is the optimal strategy under certain conditions, and the brain appears to use it as a heuristic in a wider set of conditions.
Affiliation(s)
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London WC1 6BT, UK
5. Lee DH, Kim JS, Ryun S, Chung CK. Discrete tactile feature comparison subprocess in human brain during a decision-making process. Cortex 2024; 171:383-396. PMID: 38101274; DOI: 10.1016/j.cortex.2023.11.004.
Abstract
From sensory input to motor action, encoded sensory features flow sequentially along cortical networks for decision-making. Despite numerous studies probing the decision-making process, the subprocess that compares encoded sensory features before a decision is made has not been fully elucidated in humans. In this study, we investigated sensory feature comparison by presenting two tasks to epilepsy patients while recording electrocorticography (ECoG): a discrimination task, in which participants made decisions by comparing two sequential tactile stimuli, and a detection task, in which participants responded to the second of two sequential tactile stimuli. By comparing tactile-specific gamma band (30-200 Hz) power between the two tasks, the decision-making process was divided into three subprocesses (categorization, comparison, and decision), consistent with a previous study (Heekeren et al., 2004). These subprocesses occurred sequentially in the dorsolateral prefrontal cortex, premotor cortex, secondary somatosensory cortex, and parietal lobe. Gamma power showed two different patterns of correlation with response time: a negative correlation in the inferior parietal lobule (IPL), where higher gamma power predicted shorter response times, and a positive correlation in the secondary somatosensory cortex (S2), where higher gamma power predicted longer response times. These results indicate that the IPL and S2 encode tactile feature comparison differently. Our connectivity analysis showed that S2 transmitted tactile information to the IPL. Our findings suggest that multiple areas in the parietal lobe encode sensory feature comparison differently before a decision is made.
Affiliation(s)
- Dong Hyeok Lee
- Department of Brain and Cognitive Sciences, College of Natural Sciences, Seoul National University, Seoul, Republic of Korea
- June Sic Kim
- The Research Institute of Basic Sciences, College of Natural Sciences, Seoul National University, Seoul, Republic of Korea
- Seokyun Ryun
- Neuroscience Research Institute, Medical Research Center, College of Medicine, Seoul National University, Seoul, Republic of Korea
- Chun Kee Chung
- Department of Brain and Cognitive Sciences, College of Natural Sciences, Seoul National University, Seoul, Republic of Korea; Neuroscience Research Institute, Medical Research Center, College of Medicine, Seoul National University, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea.
6. Nikbakht N. More Than the Sum of Its Parts: Visual-Tactile Integration in the Behaving Rat. Adv Exp Med Biol 2024; 1437:37-58. PMID: 38270852; DOI: 10.1007/978-981-99-7611-9_3.
Abstract
We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter we will examine the case of a visual-tactile orientation categorization experiment in rats and explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black and white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings made from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a cortical circuit involved in modality-invariant processing of objects.
Affiliation(s)
- Nader Nikbakht
- Massachusetts Institute of Technology, Cambridge, MA, USA.
7. Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. PMID: 37545303; PMCID: PMC10404926; DOI: 10.1098/rstb.2022.0334.
Abstract
Integrating noisy signals across time as well as sensory modalities, a process termed multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In this review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals carry complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This poses a challenge for the brain when integrating cues across time and across sensory modalities such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in posterior and frontal/prefrontal regions, helps revise conventional views of how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
8. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. PMID: 37545301; PMCID: PMC10404932; DOI: 10.1098/rstb.2022.0333.
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty, i.e. the degree of confidence in a multisensory decision, are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
9. Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. PMID: 37780704; PMCID: PMC10534010; DOI: 10.3389/fneur.2023.1266513.
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models at single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with other modality signals. For example, vestibular and optic flow could provide congruent and incongruent signals regarding spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
10. Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. PMID: 37295419; PMCID: PMC10957398; DOI: 10.1016/j.neuron.2023.05.008.
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK; UCL Institute of Ophthalmology, University College London, London, UK.
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
11. Thomas E, Ali FB, Tolambiya A, Chambellant F, Gaveau J. Too much information is no information: how machine learning and feature selection could help in understanding the motor control of pointing. Front Big Data 2023; 6:921355. PMID: 37546547; PMCID: PMC10399757; DOI: 10.3389/fdata.2023.921355.
Abstract
The aim of this study was to develop the use of machine learning techniques as a means of multivariate analysis in studies of motor control. These studies generate a huge amount of data, the analysis of which remains largely univariate. We propose the use of machine learning classification and feature selection as a means of uncovering feature combinations that are altered between conditions. High-dimensional electromyogram (EMG) vectors were generated as several arm and trunk muscles were recorded while subjects pointed at various angles above and below the gravity-neutral horizontal plane. We used Linear Discriminant Analysis (LDA) to carry out binary classifications between the EMG vectors for pointing at a particular angle vs. pointing at the gravity-neutral direction. Classification success provided a composite index of muscular adjustments for various task constraints, in this case pointing angles. To find the combination of features that were significantly altered between task conditions, we conducted a post-classification feature selection, i.e., we investigated which combination of features had allowed for the classification. Feature selection was done by comparing the representations of each category created by LDA for the classification, in other words, by computing the difference between the representations of each class. We propose that this approach will help with comparing high-dimensional EMG patterns in two ways: (i) quantifying the effects of the entire pattern rather than using single, arbitrarily defined variables and (ii) identifying the parts of the patterns that convey the most information regarding the investigated effects.
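The post-classification feature selection described above, comparing the class representations created by LDA, can be sketched in a few lines. This is a simplified two-class version that assumes a diagonal (independent-feature) pooled covariance rather than the full covariance a standard LDA would use, so it is only a baseline illustration, not the authors' pipeline; the data values are made up:

```python
def class_means(X):
    """Per-feature mean of a list of equal-length feature vectors."""
    n, d = len(X), len(X[0])
    return [sum(row[i] for row in X) / n for i in range(d)]

def lda_direction(X0, X1):
    """Two-class LDA discriminant direction under a diagonal pooled-covariance
    approximation: w_i = (mean1_i - mean0_i) / pooled_variance_i."""
    m0, m1 = class_means(X0), class_means(X1)
    w = []
    for i in range(len(m0)):
        v0 = sum((row[i] - m0[i]) ** 2 for row in X0)
        v1 = sum((row[i] - m1[i]) ** 2 for row in X1)
        pooled = (v0 + v1) / (len(X0) + len(X1))
        w.append((m1[i] - m0[i]) / (pooled + 1e-9))  # small term avoids /0
    return w

def feature_contributions(X0, X1):
    """Each feature's contribution to the separation between the two class
    representations along the discriminant: w_i * (mean1_i - mean0_i)."""
    m0, m1 = class_means(X0), class_means(X1)
    w = lda_direction(X0, X1)
    return [wi * (b - a) for wi, a, b in zip(w, m0, m1)]

# Made-up EMG-like vectors: feature 0 differs between conditions, feature 1 does not
X0 = [[0.0, 5.0], [0.1, 5.1], [-0.1, 4.9]]
X1 = [[1.0, 5.0], [1.1, 5.1], [0.9, 4.9]]
contrib = feature_contributions(X0, X1)  # feature 0 dominates the separation
```

Ranking features by `contrib` recovers the idea in the abstract: the classifier's decision axis itself identifies which muscles (features) carry the condition difference.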
Affiliation(s)
- Elizabeth Thomas
- INSERM U1093, UFR STAPS, Université de Bourgogne Franche Comté, Dijon, France
- Ferid Ben Ali
- School of Engineering and Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Arvind Tolambiya
- Applied Intelligence Hub, Accenture Solutions Private Ltd., Hyderabad, Telangana, India
- Florian Chambellant
- INSERM U1093, UFR STAPS, Université de Bourgogne Franche Comté, Dijon, France
- Jérémie Gaveau
- INSERM U1093, UFR STAPS, Université de Bourgogne Franche Comté, Dijon, France
12. Gao Y, Xue K, Odegaard B, Rahnev D. Common computations in automatic cue combination and metacognitive confidence reports. bioRxiv [Preprint] 2023:2023.06.07.544029. PMID: 37333352; PMCID: PMC10274803; DOI: 10.1101/2023.06.07.544029.
Abstract
Appropriate perceptual decision making necessitates the accurate estimation and use of sensory uncertainty. Such estimation has been studied in the context of both low-level multisensory cue combination and metacognitive estimation of confidence, but it remains unclear whether the same computations underlie both sets of uncertainty estimation. We created visual stimuli with low vs. high overall motion energy, such that the high-energy stimuli led to higher confidence but lower accuracy in a visual-only task. Importantly, we tested the impact of the low- and high-energy visual stimuli on auditory motion perception in a separate task. Despite being irrelevant to the auditory task, both visual stimuli impacted auditory judgments presumably via automatic low-level mechanisms. Critically, we found that the high-energy visual stimuli influenced the auditory judgments more strongly than the low-energy visual stimuli. This effect was in line with the confidence but contrary to the accuracy differences between the high- and low-energy stimuli in the visual-only task. These effects were captured by a simple computational model that assumes common computational principles underlying both confidence reports and multisensory cue combination. Our results reveal a deep link between automatic sensory processing and metacognitive confidence reports, and suggest that vastly different stages of perceptual decision making rely on common computational principles.
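A standard reference point for the cue combination discussed here is reliability weighting, in which the weight a cue receives tracks its estimated uncertainty. The authors' model is more specific than this, so the sketch below is only the textbook inverse-variance baseline, with made-up numbers:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Precision-weighted combination of two cue estimates. The less variable
    cue gets the larger weight, and the combined variance is smaller than
    either cue's variance alone."""
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    w_a = prec_a / (prec_a + prec_b)
    est = w_a * est_a + (1.0 - w_a) * est_b
    var = 1.0 / (prec_a + prec_b)
    return est, var

# Made-up example: an auditory estimate at 0 deg (variance 4) combined with
# a visual estimate at 10 deg (variance 1) is pulled mostly toward vision.
est, var = combine_cues(0.0, 4.0, 10.0, 1.0)
```

The paper's key observation maps onto this scheme as a mismatch: if the uncertainty estimate driving the weights follows confidence rather than objective accuracy, a high-energy visual cue can receive a larger weight even when it is objectively less reliable.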
13.
Abstract
Neural mechanisms of perceptual decision making have been extensively studied in experimental settings that mimic stable environments with repeating stimuli, fixed rules, and payoffs. In contrast, we live in an ever-changing environment and have varying goals and behavioral demands. To accommodate variability, our brain flexibly adjusts decision-making processes depending on context. Here, we review a growing body of research that explores the neural mechanisms underlying this flexibility. We highlight diverse forms of context dependency in decision making implemented through a variety of neural computations. Context-dependent neural activity is observed in a distributed network of brain structures, including posterior parietal, sensory, motor, and subcortical regions, as well as the prefrontal areas classically implicated in cognitive control. We propose that investigating the distributed network underlying flexible decisions is key to advancing our understanding and discuss a path forward for experimental and theoretical investigations.
Affiliation(s)
- Gouki Okazawa
- Center for Neural Science, New York University, New York, NY, USA
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
14. Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023:7024719. PMID: 36734278; DOI: 10.1093/cercor/bhac541.
Abstract
Gaze changes can misalign the spatial reference frames that encode visual and vestibular signals in cortex, which may affect heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change subjects' gaze direction, we tested heading discrimination with visual, vestibular, and combined stimuli in a reaction-time task in which subjects controlled when to respond. Gaze changes induced substantial biases in perceived heading and increased discrimination thresholds and reaction times in all stimulus conditions. For the visual stimulus, gaze effects were induced by changing the eye-in-world position, and perceived heading was biased opposite to the direction of gaze. In contrast, vestibular gaze effects were induced by changing the eye-in-head position, and perceived heading was biased in the same direction as gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the two signals deviated substantially from the predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on heading discrimination and emphasize that the transformation of spatial reference frames may underlie these effects.
Collapse
Affiliation(s)
- Wei Gao
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Yipeng Lin
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Jiangrong Shen
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
| | - Jianing Han
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
| | - Xiaoxiao Song
- Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
| | - Yukun Lu
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Huijia Zhan
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Qianbing Li
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Haoting Ge
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| | - Zheng Lin
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
| | - Wenlei Shi
- Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
| | - Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
| | - Huajin Tang
- College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
| | - Xiaodong Chen
- Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
| |
Collapse
|
15
|
Masís J, Chapman T, Rhee JY, Cox DD, Saxe AM. Strategically managing learning during perceptual decision making. eLife 2023; 12:64978. [PMID: 36786427 PMCID: PMC9928425 DOI: 10.7554/elife.64978] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 01/15/2023] [Indexed: 02/15/2023] Open
Abstract
Making optimal decisions in the face of noise requires balancing short-term speed and accuracy. But a theory of optimality should account for the fact that short-term speed can influence long-term accuracy through learning. Here, we demonstrate that long-term learning is an important dynamical dimension of the speed-accuracy trade-off. We study learning trajectories in rats and formally characterize these dynamics in a theory expressed as both a recurrent neural network and an analytical extension of the drift-diffusion model that learns over time. The model reveals that choosing suboptimal response times to learn faster sacrifices immediate reward, but can lead to greater total reward. We empirically verify predictions of the theory, including a relationship between stimulus exposure and learning speed, and a modulation of reaction time by future learning prospects. We find that rats' strategies approximately maximize total reward over the full learning epoch, suggesting cognitive control over the learning process.
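The short-term trade-off the authors describe can be made concrete with the standard closed-form expressions for drift-diffusion accuracy and mean decision time (unit noise, symmetric bounds). The reward-rate objective and timing parameters below are illustrative assumptions, not the paper's fitted model:

```python
import math

def ddm_accuracy(a, v):
    # Probability of a correct choice for drift v and symmetric bounds +/-a (unit noise).
    return 1.0 / (1.0 + math.exp(-2.0 * a * v))

def ddm_mean_dt(a, v):
    # Mean decision time for the same parameterization.
    return (a / v) * math.tanh(a * v)

def reward_rate(a, v, non_decision=0.3, iti=2.0):
    # Rewards per second: expected accuracy earned once per trial of total duration.
    acc = ddm_accuracy(a, v)
    trial_time = ddm_mean_dt(a, v) + non_decision + iti
    return acc / trial_time

# Sweep the decision boundary: the reward-rate-optimal boundary is intermediate,
# neither maximally fast (tiny a) nor maximally accurate (huge a).
v = 1.0
boundaries = [0.1 * i for i in range(1, 41)]
best_a = max(boundaries, key=lambda a: reward_rate(a, v))
print(round(best_a, 1))  # → 0.7
```

The paper's point is that a learner may rationally deviate from this single-trial optimum (e.g., respond more slowly) when slower responses speed up learning and thereby increase total reward over the whole learning epoch.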
Collapse
Affiliation(s)
- Javier Masís
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, United States; Center for Brain Science, Harvard University, Cambridge, United States
| | - Travis Chapman
- Center for Brain Science, Harvard University, Cambridge, United States
| | - Juliana Y Rhee
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, United States; Center for Brain Science, Harvard University, Cambridge, United States
| | - David D Cox
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, United States; Center for Brain Science, Harvard University, Cambridge, United States
| | - Andrew M Saxe
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
16
|
Noel JP, Paredes R, Terrebonne E, Feldman JI, Woynaroski T, Cascio CJ, Seriès P, Wallace MT. Inflexible Updating of the Self-Other Divide During a Social Context in Autism: Psychophysical, Electrophysiological, and Neural Network Modeling Evidence. BIOLOGICAL PSYCHIATRY. COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2022; 7:756-764. [PMID: 33845169 PMCID: PMC8521572 DOI: 10.1016/j.bpsc.2021.03.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Revised: 03/08/2021] [Accepted: 03/29/2021] [Indexed: 01/21/2023]
Abstract
BACKGROUND Autism spectrum disorder (ASD) affects many aspects of life, from social interactions to (multi)sensory processing. Similarly, the condition expresses at a variety of levels of description, from genetics to neural circuits and interpersonal behavior. We attempt to bridge between domains and levels of description by detailing the behavioral, electrophysiological, and putative neural network basis of peripersonal space (PPS) updating in ASD during a social context, given that the encoding of this space relies on appropriate multisensory integration, is malleable by social context, and is thought to delineate the boundary between the self and others. METHODS Fifty (20 male/30 female) young adults, either diagnosed with ASD or age- and sex-matched individuals, took part in a visuotactile reaction time task indexing PPS, while high-density electroencephalography was continuously recorded. Neural network modeling was performed in silico. RESULTS Multisensory psychophysics demonstrates that while PPS in neurotypical individuals shrinks in the presence of others-as to "give space"-this does not occur in ASD. Likewise, electroencephalography recordings suggest that multisensory integration is altered by social context in neurotypical individuals but not in individuals with ASD. Finally, a biologically plausible neural network model shows, as a proof of principle, that PPS updating may be inflexible in ASD owing to the altered excitatory/inhibitory balance that characterizes neural circuits in animal models of ASD. CONCLUSIONS Findings are conceptually in line with recent statistical inference accounts suggesting diminished flexibility in ASD, and extend these observations by suggesting, in an example relevant for social cognition, that such inflexibility may be due to excitatory/inhibitory imbalances.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Center for Neural Science, New York University, New York, New York.
| | - Renato Paredes
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
| | - Emily Terrebonne
- Undergraduate Neuroscience Program, Vanderbilt University, Nashville, Tennessee; School of Medicine and Health Sciences, George Washington University, Washington, District of Columbia
| | - Jacob I Feldman
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | - Tiffany Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | - Carissa J Cascio
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | - Peggy Seriès
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, United Kingdom
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| |
Collapse
|
17
|
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337 PMCID: PMC9849545 DOI: 10.1007/s12264-022-00916-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023] Open
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views of the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
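The "statistically Bayesian-optimal" integration the review refers to is maximum-likelihood cue combination: each cue is weighted by its relative reliability (inverse variance), and the combined estimate is more reliable than either cue alone. A minimal sketch, with illustrative numbers rather than values from the reviewed studies:

```python
def integrate_cues(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Maximum-likelihood (reliability-weighted) combination of two cues.

    Weights are relative reliabilities (inverse variances); the combined
    estimate has lower variance than either cue alone.
    """
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    w_vis = r_vis / (r_vis + r_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    sigma = (1.0 / (r_vis + r_vest)) ** 0.5
    return mu, sigma

# Illustrative: visual heading 10 deg (sd 2) + vestibular heading 14 deg (sd 4).
mu, sigma = integrate_cues(10.0, 2.0, 14.0, 4.0)
print(round(mu, 2), round(sigma, 2))  # → 10.8 1.79
```

Note how the combined estimate sits closer to the more reliable (visual) cue and its uncertainty falls below that of either single cue, the signature of optimal integration tested in these psychophysical studies.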
Collapse
|
18
|
Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction. J Neurosci 2022; 42:2344-2355. [PMID: 35091504 PMCID: PMC8936614 DOI: 10.1523/jneurosci.0861-21.2022] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 11/29/2021] [Accepted: 01/02/2022] [Indexed: 12/16/2022] Open
Abstract
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively explored two texture stimuli to discriminate them using visual (V) or haptic (H) information alone, or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control: on each trial, they could choose where to sample information from and for how long. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology.
Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.SIGNIFICANCE STATEMENT In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
Collapse
|
19
|
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated-a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Tandon School of Engineering, New York University, New York, NY 11201, USA
| |
Collapse
|
20
|
Neurocomputational mechanisms underlying cross-modal associations and their influence on perceptual decisions. Neuroimage 2021; 247:118841. [PMID: 34952232 PMCID: PMC9127393 DOI: 10.1016/j.neuroimage.2021.118841] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 12/07/2021] [Accepted: 12/19/2021] [Indexed: 12/02/2022] Open
Abstract
When exposed to complementary features of information across sensory modalities, our brains formulate cross-modal associations between features of stimuli presented separately to multiple modalities. For example, auditory pitch-visual size associations map high-pitch tones with small-size visual objects, and low-pitch tones with large-size visual objects. Preferential, or congruent, cross-modal associations have been shown to affect behavioural performance, i.e. choice accuracy and reaction time (RT) across multisensory decision-making paradigms. However, the neural mechanisms underpinning such influences in perceptual decision formation remain unclear. Here, we sought to identify when perceptual improvements from associative congruency emerge in the brain during decision formation. In particular, we asked whether such improvements represent ‘early’ sensory processing benefits, or ‘late’ post-sensory changes in decision dynamics. Using a modified version of the Implicit Association Test (IAT), coupled with electroencephalography (EEG), we measured the neural activity underlying the effect of auditory stimulus-driven pitch-size associations on perceptual decision formation. Behavioural results showed that participants responded significantly faster during trials when auditory pitch was congruent, rather than incongruent, with its associative visual size counterpart. We used multivariate Linear Discriminant Analysis (LDA) to characterise the spatiotemporal dynamics of EEG activity underpinning IAT performance. We found an ‘Early’ component (∼100–110 ms post-stimulus onset) coinciding with the time of maximal discrimination of the auditory stimuli, and a ‘Late’ component (∼330–340 ms post-stimulus onset) underlying IAT performance. 
To characterise the functional role of these components in decision formation, we incorporated them into a neurally informed Hierarchical Drift Diffusion Model (HDDM). This revealed that the Late component decreased response caution, so that less sensory evidence needed to be accumulated, whereas the Early component increased the duration of sensory-encoding processes on incongruent trials. Overall, our results provide mechanistic insight into the contributions of 'early' sensory processing and 'late' post-sensory neural representations of associative congruency to perceptual decision formation.
Collapse
|
21
|
Chau E, Murray CA, Shams L. Hierarchical drift diffusion modeling uncovers multisensory benefit in numerosity discrimination tasks. PeerJ 2021; 9:e12273. [PMID: 34760356 PMCID: PMC8556708 DOI: 10.7717/peerj.12273] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 09/19/2021] [Indexed: 11/30/2022] Open
Abstract
Studies of accuracy and reaction time in decision making often observe a speed-accuracy tradeoff, where either accuracy or reaction time is sacrificed for the other. While this effect may mask certain multisensory benefits in performance when accuracy and reaction time are separately measured, drift diffusion models (DDMs) are able to consider both simultaneously. However, drift diffusion models are often limited by large sample size requirements for reliable parameter estimation. One solution to this restriction is the use of hierarchical Bayesian estimation for DDM parameters. Here, we utilize hierarchical drift diffusion models (HDDMs) to reveal a multisensory advantage in auditory-visual numerosity discrimination tasks. By fitting this model with a modestly sized dataset, we also demonstrate that large sample sizes are not necessary for reliable parameter estimation.
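The reason a drift-diffusion analysis can expose a multisensory benefit that separate accuracy and RT measures mask is that a single drift-rate parameter jointly produces both faster and more accurate responses. A minimal simulation (not the hierarchical Bayesian fit used in the paper; the drift values are hypothetical):

```python
import random

def simulate_ddm(drift, boundary=1.0, sigma=1.0, dt=0.002, n_trials=1000, seed=0):
    """Simulate a drift-diffusion process between bounds +/-boundary.

    Returns (accuracy, mean decision time). The upper bound is the
    correct response for a positive drift.
    """
    rng = random.Random(seed)
    correct, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        correct += x >= boundary
        rts.append(t)
    return correct / n_trials, sum(rts) / n_trials

# Hypothetical drift rates: unisensory vs. a larger multisensory drift.
acc_uni, rt_uni = simulate_ddm(drift=0.5)
acc_multi, rt_multi = simulate_ddm(drift=2.0)
print(acc_multi > acc_uni and rt_multi < rt_uni)  # → True
```

A higher drift simultaneously raises accuracy and lowers RT, which is why fitting the model captures a benefit that a speed-accuracy trade-off would otherwise obscure.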
Collapse
Affiliation(s)
- Edwin Chau
- Department of Mathematics, University of California, Los Angeles, Los Angeles, California, USA
| | - Carolyn A Murray
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, USA
| | - Ladan Shams
- Department of Psychology, BioEngineering, and Interdepartmental Neuroscience Program, University of California, Los Angeles, Los Angeles, California, USA
| |
Collapse
|
22
|
Khalvati K, Kiani R, Rao RPN. Bayesian inference with incomplete knowledge explains perceptual confidence and its deviations from accuracy. Nat Commun 2021; 12:5704. [PMID: 34588440 PMCID: PMC8481237 DOI: 10.1038/s41467-021-25419-4] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 08/04/2021] [Indexed: 11/08/2022] Open
Abstract
In perceptual decisions, subjects infer hidden states of the environment based on noisy sensory information. Here we show that both choice and its associated confidence are explained by a Bayesian framework based on partially observable Markov decision processes (POMDPs). We test our model on monkeys performing a direction-discrimination task with post-decision wagering, demonstrating that the model explains objective accuracy and predicts subjective confidence. Further, we show that the model replicates well-known discrepancies of confidence and accuracy, including the hard-easy effect, opposing effects of stimulus variability on confidence and accuracy, dependence of confidence ratings on simultaneous or sequential reports of choice and confidence, apparent difference between choice and confidence sensitivity, and seemingly disproportionate influence of choice-congruent evidence on confidence. These effects may not be signatures of sub-optimal inference or discrepant computational processes for choice and confidence. Rather, they arise in Bayesian inference with incomplete knowledge of the environment.
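The core of such a Bayesian account can be sketched in a few lines: accumulate a log-likelihood ratio over noisy samples, choose its sign, and read confidence off the posterior probability of the chosen hypothesis. This is a deliberate simplification of the paper's POMDP model, assuming Gaussian evidence with known parameters and a flat prior:

```python
import math

def choice_and_confidence(samples, sigma=1.0, mu=1.0):
    """Direction discrimination between means +mu and -mu under Gaussian noise.

    The log-likelihood ratio of a sample x is 2*mu*x/sigma^2; confidence is
    the posterior probability of the chosen direction (flat prior).
    """
    llr = sum(2.0 * mu * x / sigma**2 for x in samples)
    choice = 1 if llr >= 0 else -1
    confidence = 1.0 / (1.0 + math.exp(-abs(llr)))
    return choice, confidence

# Illustrative noisy samples favoring the positive direction:
choice, conf = choice_and_confidence([0.4, 1.1, -0.2, 0.9])
print(choice, round(conf, 3))  # → 1 0.988
```

The paper's contribution is what happens when the observer's knowledge of the generative parameters is incomplete: the same posterior read-out then reproduces well-known choice-confidence dissociations without assuming sub-optimal inference.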
Collapse
Affiliation(s)
- Koosha Khalvati
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
| | - Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Neuroscience Institute, NYU Langone Medical Center, New York, NY, USA
| | - Rajesh P N Rao
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA.
- Center for Neurotechnology, University of Washington, Seattle, WA, USA.
| |
Collapse
|
23
|
Linear Integration of Sensory Evidence over Space and Time Underlies Face Categorization. J Neurosci 2021; 41:7876-7893. [PMID: 34326145 DOI: 10.1523/jneurosci.3055-20.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 07/08/2021] [Accepted: 07/21/2021] [Indexed: 11/21/2022] Open
Abstract
Visual object recognition relies on elaborate sensory processes that transform retinal inputs to object representations, but it also requires decision-making processes that read out object representations and function over prolonged time scales. The computational properties of these decision-making processes remain underexplored for object recognition. Here, we study these computations by developing a stochastic multifeature face categorization task. Using quantitative models and tight control of spatiotemporal visual information, we demonstrate that human subjects (five males, eight females) categorize faces through an integration process that first linearly adds the evidence conferred by task-relevant features over space to create aggregated momentary evidence and then linearly integrates it over time with minimum information loss. Discrimination of stimuli along different category boundaries (e.g., identity or expression of a face) is implemented by adjusting feature weights of spatial integration. This linear but flexible integration process over space and time bridges past studies on simple perceptual decisions to complex object recognition behavior.SIGNIFICANCE STATEMENT Although simple perceptual decision-making such as discrimination of random dot motion has been successfully explained as accumulation of sensory evidence, we lack rigorous experimental paradigms to study the mechanisms underlying complex perceptual decision-making such as discrimination of naturalistic faces. We develop a stochastic multifeature face categorization task as a systematic approach to quantify the properties and potential limitations of the decision-making processes during object recognition. We show that human face categorization could be modeled as a linear integration of sensory evidence over space and time. 
Our framework to study object recognition as a spatiotemporal integration process is broadly applicable to other object categories and bridges past studies of object recognition and perceptual decision-making.
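The integration scheme the authors describe, a weighted linear sum over task-relevant features within each frame followed by lossless summation over frames, can be written compactly. The feature values and weights below are hypothetical, for illustration only:

```python
def categorize_face(frames, weights):
    """Linear spatiotemporal integration for categorization.

    frames: per-frame lists of momentary evidence from informative features
    (e.g., eyes, nose, mouth). Spatial integration: weighted linear sum
    within each frame. Temporal integration: lossless sum across frames.
    """
    decision_variable = 0.0
    for frame in frames:
        momentary = sum(w * f for w, f in zip(weights, frame))
        decision_variable += momentary
    label = 'identity A' if decision_variable > 0 else 'identity B'
    return label, decision_variable

# Three frames of hypothetical feature evidence, three features per frame.
frames = [[0.2, -0.1, 0.3], [0.1, 0.0, -0.2], [0.4, 0.1, 0.2]]
weights = [1.0, 0.5, 0.8]  # hypothetical feature weights for one category boundary
label, dv = categorize_face(frames, weights)
print(label, round(dv, 2))  # → identity A 0.94
```

In this scheme, switching the discrimination boundary (e.g., identity versus expression) amounts to swapping in a different weight vector while the integration machinery stays fixed, matching the paper's account.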
Collapse
|
24
|
Markkula G, Uludağ Z, Wilkie RM, Billington J. Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection. PLoS Comput Biol 2021; 17:e1009096. [PMID: 34264935 PMCID: PMC8282001 DOI: 10.1371/journal.pcbi.1009096] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 05/19/2021] [Indexed: 11/24/2022] Open
Abstract
Evidence accumulation models provide a dominant account of human decision-making, and have been particularly successful at explaining behavioral and neural data in laboratory paradigms using abstract, stationary stimuli. It has been proposed, but with limited in-depth investigation so far, that similar decision-making mechanisms are involved in tasks of a more embodied nature, such as movement and locomotion, by directly accumulating externally measurable sensory quantities whose precise, typically continuously time-varying, magnitudes are important for successful behavior. Here, we leverage collision threat detection as a task which is ecologically relevant in this sense, but which can also be rigorously observed and modelled in a laboratory setting. Conventionally, it is assumed that humans are limited in this task by a perceptual threshold on the optical expansion rate (the visual looming) of the obstacle. Using concurrent recordings of EEG and behavioral responses, we disprove this conventional assumption, and instead provide strong evidence that humans detect collision threats by accumulating the continuously time-varying visual looming signal. Generalizing existing accumulator model assumptions from stationary to time-varying sensory evidence, we show that our model accounts for previously unexplained empirical observations and full distributions of detection responses. We replicate a pre-response centroparietal positivity (CPP) in scalp potentials, which has previously been found to correlate with accumulated decision evidence. In contrast with these existing findings, we show that our model is capable of predicting the onset of the CPP signature rather than its buildup, suggesting that neural evidence accumulation is implemented differently, possibly in distinct brain regions, in collision detection compared to previously studied paradigms.
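The contrast between a fixed perceptual threshold on looming and accumulation of the time-varying looming signal can be sketched as follows; all parameter values are illustrative, not the fitted ones from the study:

```python
def detect_collision(looming, baseline=0.05, gain=1.0, bound=1.0, dt=0.01):
    """Accumulate the momentary looming signal above a fixed baseline until a
    decision bound is crossed (contrast: a simple threshold would respond the
    instant looming itself exceeds some value). Returns the detection time,
    or None if the bound is never reached.
    """
    evidence = 0.0
    for step, signal in enumerate(looming):
        evidence += gain * (signal - baseline) * dt
        if evidence >= bound:
            return (step + 1) * dt
    return None

# Linearly growing looming signals (approaching obstacle), two growth rates:
slow = [0.3 * (i * 0.01) for i in range(1000)]
fast = [0.6 * (i * 0.01) for i in range(1000)]
t_slow, t_fast = detect_collision(slow), detect_collision(fast)
print(t_fast < t_slow)  # → True
```

Because the accumulator integrates the whole history of the signal, detection time depends on how looming evolves over time, not just on when it crosses a fixed value, which is what lets this model class explain full response-time distributions.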
Collapse
Affiliation(s)
- Gustav Markkula
- Institute for Transport Studies, University of Leeds, Leeds, United Kingdom
| | - Zeynep Uludağ
- School of Psychology, University of Leeds, Leeds, United Kingdom
| | | | - Jac Billington
- School of Psychology, University of Leeds, Leeds, United Kingdom
| |
Collapse
|
25
|
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
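The binding or causal inference problem described here is commonly formalized as a Bayesian model comparison: compute the likelihood of the two measurements under a common cause versus independent causes, then weight by the prior probability of a common cause. A sketch of this standard formulation (zero-mean spatial prior and illustrative variances assumed):

```python
import math

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_a, x_v, var_a=1.0, var_v=4.0, var_p=100.0, prior_c=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v) measurements
    share a single cause, given sensory variances and a N(0, var_p) prior."""
    # C = 1: both measurements generated by one source s ~ N(0, var_p).
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                               + x_a ** 2 * var_v
                               + x_v ** 2 * var_a) / denom) \
        / (2 * math.pi * math.sqrt(denom))
    # C = 2: independent sources, each drawn from the prior.
    like_c2 = gauss(x_a, 0.0, var_a + var_p) * gauss(x_v, 0.0, var_v + var_p)
    post = prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)
    return post

# Nearby signals favor integration; discrepant signals favor segregation.
print(p_common(1.0, 1.5) > p_common(1.0, 8.0))  # → True
```

The observer integrates when the common-cause posterior is high and segregates otherwise, which is the inference-over-structure step this review highlights as central to multisensory perception in cluttered environments.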
Collapse
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands;
| |
Collapse
|
26
|
Statistical approaches to identifying lapses in psychometric response data. Psychon Bull Rev 2021; 28:1433-1457. [PMID: 33825094 DOI: 10.3758/s13423-021-01876-2] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/02/2021] [Indexed: 11/08/2022]
Abstract
Psychometric curve fits relate physical stimuli to an observer's performance. In experiments, an observer may "lapse" and respond with a random guess, which may negatively impact (e.g., bias) the fitted psychometric parameters. A lapse-rate model, popularized by Wichmann and Hill, reduces the impact of lapses on the other estimated parameters by adding a parameter that models the lapse rate. Since lapses are discrete events, we developed a discrete lapse theory and tested a "lapse identification" algorithm that identifies individual outlier trials (i.e., potential lapses) based on an approximate statistical criterion and discards them. Specifically, we focused on stimuli sampled with an adaptive staircase in a one-interval, direction-recognition task (i.e., a psychometric function ranging from 0 to 1, in which the spread of the curve corresponds to the threshold, often the parameter of interest). Through simulations, we found that as the lapse rate increased, the threshold became substantially overestimated, consistent with earlier analyses. While the lapse-rate model reduced this overestimation when lapses were frequent, at lower lapse rates it yielded substantial threshold underestimation, though less so when fitting many (e.g., 1,000) trials. In comparison, the lapse-identification algorithm yielded accurate threshold estimates across a wide range of lapse rates (0 to 5%), which is critical because the lapse rate is seldom known. We further demonstrate that the algorithm performs well across a variety of experimental conditions and conclude with some considerations for its use. In particular, we suggest using the lapse-identification algorithm unless the experiment has many trials (e.g., >500) or the lapse rate is somehow known to be high (e.g., ≥5%), in which case the lapse-rate model approaches remain preferred.
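The Wichmann and Hill-style lapse-rate model discussed here is simple to write down for this one-interval, direction-recognition setting (guess rate 0.5); the parameter values below are illustrative:

```python
import math

def psychometric(x, mu, sigma, lapse):
    """Lapse-rate psychometric function for a direction-recognition task.

    With probability `lapse` the observer guesses (p = 0.5); otherwise the
    response follows a cumulative Gaussian with mean mu and spread sigma
    (the spread is the threshold parameter of interest).
    """
    phi = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return lapse * 0.5 + (1.0 - lapse) * phi

# With a 4% lapse rate the curve is squeezed into [0.02, 0.98]:
print(round(psychometric(0.0, 0.0, 1.0, 0.04), 3))   # → 0.5
print(round(psychometric(10.0, 0.0, 1.0, 0.04), 3))  # → 0.98
```

Fitting without the lapse term forces the cumulative Gaussian alone to account for lapse-flattened asymptotes, inflating the estimated spread, which is exactly the threshold overestimation the simulations above quantify.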
Collapse
|
27
|
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 01/23/2021] [Accepted: 02/02/2021] [Indexed: 11/29/2022]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
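The reliability-weighted integration at the heart of this Bayesian framework can be sketched as inverse-variance weighting of two cues (a standard forced-fusion model; the cue values and variances below are illustrative):

```python
# Forced-fusion Bayesian cue combination: each cue is weighted by its
# reliability (inverse variance), and the fused estimate is more precise
# than either cue alone.
def fuse(mu_a, var_a, mu_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # normalised auditory weight
    mu = w_a * mu_a + (1 - w_a) * mu_v            # reliability-weighted mean
    var = 1 / (1 / var_a + 1 / var_v)             # fused variance <= each cue's
    return mu, var

# Example: a precise visual cue dominates a noisy auditory cue.
mu, var = fuse(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
```

Age-related increases in a cue's variance would, under this scheme, shift the weights away from that cue without changing the inference rule itself, which is the distinction the review draws.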
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
|
28
|
Pisupati S, Chartarifsky-Lynn L, Khanal A, Churchland AK. Lapses in perceptual decisions reflect exploration. eLife 2021; 10:55490. [PMID: 33427198 PMCID: PMC7846276 DOI: 10.7554/elife.55490] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2020] [Accepted: 01/10/2021] [Indexed: 12/17/2022] Open
Abstract
Perceptual decision-makers often display a constant rate of errors independent of evidence strength. These ‘lapses’ are treated as a nuisance arising from noise tangential to the decision, e.g. inattention or motor errors. Here, we use a multisensory decision task in rats to demonstrate that these explanations cannot account for lapses’ stimulus dependence. We propose a novel explanation: lapses reflect a strategic trade-off between exploiting known rewarding actions and exploring uncertain ones. We tested this model’s predictions by selectively manipulating one action’s reward magnitude or probability. As uniquely predicted by this model, changes were restricted to lapses associated with that action. Finally, we show that lapses are a powerful tool for assigning decision-related computations to neural structures based on disruption experiments (here, posterior striatum and secondary motor cortex). These results suggest that lapses reflect an integral component of decision-making and are informative about action values in normal and disrupted brain states.
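A toy version of the exploration account (a hypothetical softmax over expected action values, not the authors' fitted model) shows how lapses persist on easy trials and shrink when that action's reward grows:

```python
import numpy as np

# Choices are drawn from a softmax over expected action values, so even with
# overwhelming evidence the preferred action is not chosen with probability 1:
# the asymptotic error rate is a "lapse".
def p_choose_right(evidence_right, reward_right=1.0, reward_left=1.0, beta=3.0):
    # expected value of each action = reward magnitude * prob. it is correct
    p_right_correct = 1 / (1 + np.exp(-evidence_right))
    q_right = reward_right * p_right_correct
    q_left = reward_left * (1 - p_right_correct)
    return 1 / (1 + np.exp(-beta * (q_right - q_left)))

easy = 6.0                                   # very strong rightward evidence
lapse_baseline = 1 - p_choose_right(easy)    # asymptotic error rate > 0
lapse_big_reward = 1 - p_choose_right(easy, reward_right=2.0)
```

Raising the reward magnitude for rightward choices shrinks the rightward lapse rate, mirroring the paper's key prediction that reward manipulations selectively move the lapses associated with that action.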
Affiliation(s)
- Sashank Pisupati
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States; CSHL School of Biological Sciences, Cold Spring Harbor, New York, United States
- Lital Chartarifsky-Lynn
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States; CSHL School of Biological Sciences, Cold Spring Harbor, New York, United States
- Anup Khanal
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States
|
29
|
Beierholm U, Rohe T, Ferrari A, Stegle O, Noppeney U. Using the past to estimate sensory uncertainty. eLife 2020; 9:54172. [PMID: 33319749 PMCID: PMC7806269 DOI: 10.7554/elife.54172] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 12/13/2020] [Indexed: 01/14/2023] Open
Abstract
To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
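The exponential-discounting approximation to the optimal Bayesian learner can be sketched as an exponentially weighted running average of squared errors (the learning rate and the variance jump below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponentially discounted estimate of sensory variance: each new squared
# error updates the estimate, and older evidence decays at rate (1 - alpha).
def track_variance(errors, alpha=0.05, init=1.0):
    est, trace = init, []
    for e in errors:
        est = (1 - alpha) * est + alpha * e**2
        trace.append(est)
    return np.array(trace)

# Visual noise jumps from sigma = 1 to sigma = 3 halfway through the session.
errors = np.concatenate([rng.normal(0, 1, 2000), rng.normal(0, 3, 2000)])
trace = track_variance(errors)
m_early = trace[1500:2000].mean()   # settles near the true variance of 1
m_late = trace[3500:4000].mean()    # re-converges near the new variance of 9
```

The estimator combines past and current signals, so it tracks intermittent jumps in noise with a short lag rather than recomputing uncertainty from each stimulus alone, which is the behavior the paper attributes to observers.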
Affiliation(s)
- Ulrik Beierholm
- Psychology Department, Durham University, Durham, United Kingdom
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Psychology, Friedrich-Alexander University Erlangen-Nuernberg, Erlangen, Germany
- Ambra Ferrari
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Oliver Stegle
- Max Planck Institute for Intelligent Systems, Tübingen, Germany; European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany; Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
|
30
|
Auditory information enhances post-sensory visual evidence during rapid multisensory decision-making. Nat Commun 2020; 11:5440. [PMID: 33116148 PMCID: PMC7595090 DOI: 10.1038/s41467-020-19306-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 10/06/2020] [Indexed: 11/08/2022] Open
Abstract
Despite recent progress in understanding multisensory decision-making, a conclusive mechanistic account of how the brain translates the relevant evidence into a decision is lacking. Specifically, it remains unclear whether perceptual improvements during rapid multisensory decisions are best explained by sensory (i.e., ‘Early’) processing benefits or post-sensory (i.e., ‘Late’) changes in decision dynamics. Here, we employ a well-established visual object categorisation task in which early sensory and post-sensory decision evidence can be dissociated using multivariate pattern analysis of the electroencephalogram (EEG). We capitalize on these distinct neural components to identify when and how complementary auditory information influences the encoding of decision-relevant visual evidence in a multisensory context. We show that it is primarily the post-sensory, rather than the early sensory, EEG component amplitudes that are being amplified during rapid audiovisual decision-making. Using a neurally informed drift diffusion model we demonstrate that a multisensory behavioral improvement in accuracy arises from an enhanced quality of the relevant decision evidence, as captured by the post-sensory EEG component, consistent with the emergence of multisensory evidence in higher-order brain areas. A conclusive account on how the brain translates audiovisual evidence into a rapid decision is still lacking. Here, using a neurally-informed modelling approach, the authors show that sounds amplify visual evidence later in the decision process, in line with higher-order multisensory effects.
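The behavioral signature modeled here, a higher-quality evidence stream yielding both higher accuracy and faster decisions, follows from the standard closed forms for an unbiased DDM with symmetric bounds (the drift values below are illustrative, not fitted to the paper's data):

```python
import numpy as np

# Standard closed forms for a drift-diffusion model with symmetric bounds at
# +/- a, drift v, and diffusion noise s (no bias, no non-decision time).
def ddm_accuracy(v, a=1.0, s=1.0):
    return 1 / (1 + np.exp(-2 * a * v / s**2))

def ddm_mean_dt(v, a=1.0, s=1.0):
    return (a / v) * np.tanh(a * v / s**2)

acc_vis = ddm_accuracy(v=0.8)    # visual-only drift (illustrative value)
acc_av = ddm_accuracy(v=1.0)     # audiovisually enhanced drift
dt_vis, dt_av = ddm_mean_dt(0.8), ddm_mean_dt(1.0)
```

Amplifying the post-sensory evidence, modeled as a larger drift rate, simultaneously raises accuracy and shortens mean decision time, consistent with the multisensory improvement the authors report.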
|
31
|
Shinn M, Ehrlich DB, Lee D, Murray JD, Seo H. Confluence of Timing and Reward Biases in Perceptual Decision-Making Dynamics. J Neurosci 2020; 40:7326-7342. [PMID: 32839233 PMCID: PMC7534922 DOI: 10.1523/jneurosci.0544-20.2020] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 08/09/2020] [Accepted: 08/12/2020] [Indexed: 01/22/2023] Open
Abstract
Although the decisions of our daily lives often occur in the context of temporal and reward structures, the impact of such regularities on decision-making strategy is poorly understood. Here, to explore how temporal and reward context modulate strategy, we trained 2 male rhesus monkeys to perform a novel perceptual decision-making task with asymmetric rewards and time-varying evidence reliability. To model the choice and response time patterns, we developed a computational framework for fitting generalized drift-diffusion models, which flexibly accommodate diverse evidence accumulation strategies. We found that a dynamic urgency signal and leaky integration, in combination with two independent forms of reward biases, best capture behavior. We also tested how temporal structure influences urgency by systematically manipulating the temporal structure of sensory evidence, and found that the time course of urgency was affected by temporal context. Overall, our approach identified key components of cognitive mechanisms for incorporating temporal and reward structure into decisions. SIGNIFICANCE STATEMENT: In everyday life, decisions are influenced by many factors, including reward structures and stimulus timing. While reward and timing have been characterized in isolation, ecologically valid decision-making involves a multiplicity of factors acting simultaneously. This raises questions about whether the same decision-making strategy is used when these two factors are concurrently manipulated. To address these questions, we trained rhesus monkeys to perform a novel decision-making task with both reward asymmetry and temporal uncertainty. In order to understand their strategy and hint at its neural mechanisms, we used the new generalized drift diffusion modeling framework to model both reward and timing mechanisms. We found that two reward mechanisms and two timing mechanisms are necessary to explain our data.
Affiliation(s)
- Maxwell Shinn
- Department of Psychiatry, Yale University, New Haven, Connecticut 06511
- Interdepartmental Neuroscience Program, Yale University, New Haven, Connecticut 06520
- Daniel B Ehrlich
- Department of Psychiatry, Yale University, New Haven, Connecticut 06511
- Interdepartmental Neuroscience Program, Yale University, New Haven, Connecticut 06520
- Daeyeol Lee
- Department of Neuroscience, Yale University, New Haven, Connecticut
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, Maryland 21218
- Kavli Discovery Neuroscience Institute, Johns Hopkins University, Baltimore, Maryland 21218
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland 21218
- Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland 21218
- John D Murray
- Department of Psychiatry, Yale University, New Haven, Connecticut 06511
- Interdepartmental Neuroscience Program, Yale University, New Haven, Connecticut 06520
- Hyojung Seo
- Department of Psychiatry, Yale University, New Haven, Connecticut 06511
- Interdepartmental Neuroscience Program, Yale University, New Haven, Connecticut 06520
|
32
|
Shinn M, Lam NH, Murray JD. A flexible framework for simulating and fitting generalized drift-diffusion models. eLife 2020; 9:56938. [PMID: 32749218 PMCID: PMC7462609 DOI: 10.7554/elife.56938] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 08/03/2020] [Indexed: 01/10/2023] Open
Abstract
The drift-diffusion model (DDM) is an important decision-making model in cognitive neuroscience. However, innovations in model form have been limited by methodological challenges. Here, we introduce the generalized drift-diffusion model (GDDM) framework for building and fitting DDM extensions, and provide a software package which implements the framework. The GDDM framework augments traditional DDM parameters through arbitrary user-defined functions. Models are solved numerically by directly solving the Fokker-Planck equation using efficient numerical methods, yielding a 100-fold or greater speedup over standard methodology. This speed allows GDDMs to be fit to data using maximum likelihood on the full response time (RT) distribution. We demonstrate fitting of GDDMs within our framework to both animal and human datasets from perceptual decision-making tasks, with better accuracy and fewer parameters than several DDMs implemented using the latest methodology, to test hypothesized decision-making mechanisms. Overall, our framework will allow for decision-making model innovation and novel experimental designs.
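The GDDM idea of user-supplied drift and bound functions can be sketched with a trial-wise Euler-Maruyama simulation (note the GDDM framework itself solves the Fokker-Planck equation rather than sampling trials; the leak and collapsing-bound functions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A "generalized" DDM in the GDDM spirit: drift is an arbitrary function of
# time and accumulator state, and the bound is an arbitrary function of time.
def simulate(drift, bound, n_trials=20000, dt=0.002, t_max=2.0, sigma=1.0):
    x = np.zeros(n_trials)
    rt = np.full(n_trials, np.nan)       # NaN marks trials undecided at t_max
    choice = np.zeros(n_trials)
    alive = np.ones(n_trials, dtype=bool)
    for i in range(1, int(t_max / dt) + 1):
        t = i * dt
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        # Euler-Maruyama step for dx = drift(t, x) dt + sigma dW
        x[idx] += drift(t, x[idx]) * dt + sigma * np.sqrt(dt) * rng.standard_normal(idx.size)
        hit = np.abs(x[idx]) >= bound(t)
        hit_idx = idx[hit]
        rt[hit_idx] = t
        choice[hit_idx] = np.sign(x[hit_idx])
        alive[hit_idx] = False
    return rt, choice

leaky_drift = lambda t, x: 1.0 - 0.5 * x        # constant evidence with leak
collapsing_bound = lambda t: 1.5 * np.exp(-t)   # bound decays over time

rt, choice = simulate(leaky_drift, collapsing_bound)
```

Swapping in a different drift or bound function changes the model without touching the simulator, which is the kind of extensibility the framework formalizes.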
Affiliation(s)
- Maxwell Shinn
- Department of Psychiatry, Yale University, New Haven, United States; Interdepartmental Neuroscience Program, Yale University, New Haven, United States
- Norman H Lam
- Department of Physics, Yale University, New Haven, United States
- John D Murray
- Department of Psychiatry, Yale University, New Haven, United States; Interdepartmental Neuroscience Program, Yale University, New Haven, United States; Department of Physics, Yale University, New Haven, United States
|
33
|
Takagaki K, Krug K. The effects of reward and social context on visual processing for perceptual decision-making. CURRENT OPINION IN PHYSIOLOGY 2020. [DOI: 10.1016/j.cophys.2020.08.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
34
|
Boyce WP, Lindsay A, Zgonnikov A, Rañó I, Wong-Lin K. Optimality and Limitations of Audio-Visual Integration for Cognitive Systems. Front Robot AI 2020; 7:94. [PMID: 33501261 PMCID: PMC7805627 DOI: 10.3389/frobt.2020.00094] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Accepted: 06/09/2020] [Indexed: 11/13/2022] Open
Abstract
Multimodal integration is an important process in perceptual decision-making. In humans, this process has often been shown to be statistically optimal, or near optimal: sensory information is combined in a fashion that minimizes the average error in perceptual representation of stimuli. However, sometimes there are costs that come with the optimization, manifesting as illusory percepts. We review audio-visual facilitations and illusions that are products of multisensory integration, and the computational models that account for these phenomena. In particular, the same optimal computational model can lead to illusory percepts, and we suggest that more studies are needed to detect and mitigate these illusions, which can appear as artifacts in artificial cognitive systems. We provide cautionary considerations when designing artificial cognitive systems with the view of avoiding such artifacts. Finally, we suggest avenues of research toward solutions to potential pitfalls in system design. We conclude that detailed understanding of multisensory integration and the mechanisms behind audio-visual illusions can benefit the design of artificial cognitive systems.
Affiliation(s)
- William Paul Boyce
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- Anthony Lindsay
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- Arkady Zgonnikov
- AiTech, Delft University of Technology, Delft, Netherlands
- Department of Cognitive Robotics, Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Delft, Netherlands
- Iñaki Rañó
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
- KongFatt Wong-Lin
- Intelligent Systems Research Centre, Ulster University, Magee Campus, Derry Londonderry, Northern Ireland, United Kingdom
|
35
|
Velocity influences the relative contributions of visual and vestibular cues to self-acceleration. Exp Brain Res 2020; 238:1423-1432. [DOI: 10.1007/s00221-020-05824-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Accepted: 04/27/2020] [Indexed: 11/29/2022]
|
36
|
Hernández-Pérez R, Rojas-Hortelano E, de Lafuente V. Integrating Somatosensory Information Over Time. Neuroscience 2020; 433:72-80. [DOI: 10.1016/j.neuroscience.2020.02.037] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 01/17/2020] [Accepted: 02/21/2020] [Indexed: 10/24/2022]
|
37
|
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. [PMID: 32242039 PMCID: PMC7118083 DOI: 10.1038/s41598-020-62450-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 03/13/2020] [Indexed: 11/16/2022] Open
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
- Dana Maslovat
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga
- National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
|
38
|
Colonius H, Diederich A. Formal models and quantitative measures of multisensory integration: a selective overview. Eur J Neurosci 2020; 51:1161-1178. [DOI: 10.1111/ejn.13813] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2017] [Revised: 12/18/2017] [Accepted: 12/20/2017] [Indexed: 11/26/2022]
Affiliation(s)
- Hans Colonius
- Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Adele Diederich
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
- Life Sciences and Chemistry, Jacobs University Bremen, Bremen, Germany
|
39
|
Cortical circuits for integration of self-motion and visual-motion signals. Curr Opin Neurobiol 2019; 60:122-128. [PMID: 31869592 DOI: 10.1016/j.conb.2019.11.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Revised: 11/13/2019] [Accepted: 11/15/2019] [Indexed: 12/19/2022]
Abstract
The cerebral cortex contains cells which respond to movement of the head, and these cells are thought to be involved in the perception of self-motion. In particular, studies in the primary visual cortex of mice show that both running speed and passive whole-body rotation modulates neuronal activity, and modern genetically targeted viral tracing approaches have begun to identify previously unknown circuits that underlie these responses. Here we review recent experimental findings and provide a road map for future work in mice to elucidate the functional architecture and emergent properties of a cortical network potentially involved in the generation of egocentric-based visual representations for navigation.
|
40
|
Drugowitsch J, Mendonça AG, Mainen ZF, Pouget A. Learning optimal decisions with confidence. Proc Natl Acad Sci U S A 2019; 116:24872-24880. [PMID: 31732671 PMCID: PMC6900530 DOI: 10.1073/pnas.1906787116] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Diffusion decision models (DDMs) are immensely successful models for decision making under uncertainty and time pressure. In the context of perceptual decision making, these models typically start with two input units, organized in a neuron-antineuron pair. In contrast, in the brain, sensory inputs are encoded through the activity of large neuronal populations. Moreover, while DDMs are wired by hand, the nervous system must learn the weights of the network through trial and error. There is currently no normative theory of learning in DDMs and therefore no theory of how decision makers could learn to make optimal decisions in this context. Here, we derive such a rule for learning a near-optimal linear combination of DDM inputs based on trial-by-trial feedback. The rule is Bayesian in the sense that it learns not only the mean of the weights but also the uncertainty around this mean in the form of a covariance matrix. In this rule, the rate of learning is proportional (respectively, inversely proportional) to confidence for incorrect (respectively, correct) decisions. Furthermore, we show that, in volatile environments, the rule predicts a bias toward repeating the same choice after correct decisions, with a bias strength that is modulated by the previous choice's difficulty. Finally, we extend our learning rule to cases for which one of the choices is more likely a priori, which provides insights into how such biases modulate the mechanisms leading to optimal decisions in diffusion models.
Affiliation(s)
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA 02115
- André G Mendonça
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- Zachary F Mainen
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- Alexandre Pouget
- Department of Basic Neuroscience, University of Geneva, CH-1211 Geneva, Switzerland
|
41
|
Chandrasekaran C, Hawkins GE. ChaRTr: An R toolbox for modeling choices and response times in decision-making tasks. J Neurosci Methods 2019; 328:108432. [PMID: 31586868 PMCID: PMC6980795 DOI: 10.1016/j.jneumeth.2019.108432] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 08/01/2019] [Accepted: 09/07/2019] [Indexed: 11/25/2022]
Abstract
BACKGROUND: Decision-making is the process of choosing and performing actions in response to sensory cues to achieve behavioral goals. Many mathematical models have been developed to describe the choice behavior and response time (RT) distributions of observers performing decision-making tasks. However, relatively few researchers use these models because doing so demands expertise in various numerical, statistical, and software techniques. NEW METHOD: We present a toolbox - Choices and Response Times in R, or ChaRTr - that provides the user the ability to implement and test a wide variety of decision-making models, ranging from classic through to modern versions of the diffusion decision model, to models with urgency signals or collapsing boundaries. RESULTS: In three different case studies, we demonstrate how ChaRTr can be used to effortlessly discriminate between multiple models of decision-making behavior. We also provide guidance on how to extend the toolbox to incorporate future developments in decision-making models. COMPARISON WITH EXISTING METHOD(S): Existing software packages have surmounted some of the numerical issues but have often focused on the classical decision-making model, the diffusion decision model. Recent models that posit roles for urgency, time-varying decision thresholds, noise in various aspects of the decision-formation process, or low-pass filtering of sensory evidence have proven challenging to incorporate in a coherent software framework that permits quantitative evaluation among these competing classes of decision-making models. CONCLUSION: ChaRTr can be used to make insightful statements about the cognitive processes underlying observed decision-making behavior and, ultimately, to gain deeper insights into decision mechanisms.
Affiliation(s)
- Chandramouli Chandrasekaran
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA; Department of Anatomy & Neurobiology, Boston University School of Medicine, Boston, MA, USA; Center for Systems Neuroscience, Boston University, Boston, MA, USA.
- Guy E Hawkins
- School of Psychology, University of Newcastle, Australia.
|
42
|
Hou H, Zheng Q, Zhao Y, Pouget A, Gu Y. Neural Correlates of Optimal Multisensory Decision Making under Time-Varying Reliabilities with an Invariant Linear Probabilistic Population Code. Neuron 2019; 104:1010-1021.e10. [DOI: 10.1016/j.neuron.2019.08.038] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Revised: 07/21/2019] [Accepted: 08/22/2019] [Indexed: 12/27/2022]
|
43
|
Medendorp WP, Heed T. State estimation in posterior parietal cortex: Distinct poles of environmental and bodily states. Prog Neurobiol 2019; 183:101691. [DOI: 10.1016/j.pneurobio.2019.101691] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2019] [Revised: 08/12/2019] [Accepted: 08/29/2019] [Indexed: 01/06/2023]
|
44
|
Jones SA, Beierholm U, Meijer D, Noppeney U. Older adults sacrifice response speed to preserve multisensory integration performance. Neurobiol Aging 2019; 84:148-157. [PMID: 31586863 DOI: 10.1016/j.neurobiolaging.2019.08.017] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Revised: 07/10/2019] [Accepted: 08/17/2019] [Indexed: 01/27/2023]
Abstract
Aging has been shown to impact multisensory perception, but the underlying computational mechanisms are unclear. For effective interactions with the environment, observers should integrate signals that share a common source, weighted by their reliabilities, and segregate those from separate sources. Observers are thought to accumulate evidence about the world's causal structure over time until a decisional threshold is reached. Combining psychophysics and Bayesian modeling, we investigated how aging affects audiovisual perception of spatial signals. Older and younger adults were comparable in their final localization and common-source judgment responses under both speeded and unspeeded conditions, but were disproportionately slower for audiovisually incongruent trials. Bayesian modeling showed that aging did not affect the ability to arbitrate between integration and segregation under either unspeeded or speeded conditions. However, modeling the within-trial dynamics of evidence accumulation under speeded conditions revealed that older observers accumulate noisier auditory representations for longer, set higher decisional thresholds, and have impaired motor speed. Older observers preserve audiovisual localization performance, despite noisier sensory representations, by sacrificing response speed.
Affiliation(s)
- Samuel A Jones
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK; The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- David Meijer
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
|
45
|
Chandrasekaran C, Blurton SP, Gondan M. Audiovisual detection at different intensities and delays. JOURNAL OF MATHEMATICAL PSYCHOLOGY 2019; 91:159-175. [PMID: 31404455 PMCID: PMC6688765 DOI: 10.1016/j.jmp.2019.05.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In the redundant signals task, two target stimuli are associated with the same response. If both targets are presented together, redundancy gains are observed, as compared with single-target presentation. Different models explain these redundancy gains, including race and coactivation models (e.g., the Wiener diffusion superposition model, Schwarz, 1994, Journal of Mathematical Psychology, and the Ornstein-Uhlenbeck diffusion superposition model, Diederich, 1995, Journal of Mathematical Psychology). In the present study, two monkeys performed a simple detection task with auditory, visual and audiovisual stimuli of different intensities and onset asynchronies. In its basic form, a Wiener diffusion superposition model provided only a poor description of the observed data, especially of the detection rate (i.e., accuracy or hit rate) for low stimulus intensity. We expanded the model in two ways, by (A) adding a temporal deadline, that is, restricting the evidence accumulation process to a stopping time, and (B) adding a second "nogo" barrier representing target absence. We present closed-form solutions for the mean absorption times and absorption probabilities for a Wiener diffusion process with a drift towards a single barrier in the presence of a temporal deadline (A), and numerically improved solutions for the two-barrier model (B). The best description of the data was obtained from the deadline model and substantially outperformed the two-barrier approach.
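The deadline ingredient (A) rests on a standard single-barrier result: the first-passage time of a Wiener process with positive drift is inverse-Gaussian distributed, so the probability of detecting within a deadline is its CDF (a sketch with illustrative parameters, not the authors' full audiovisual superposition model):

```python
import numpy as np
from scipy.stats import norm

# P(first passage <= T) for a Wiener process with drift mu > 0, noise sigma,
# started at 0 with a single absorbing barrier at a > 0 (inverse-Gaussian CDF).
def p_detect_by(T, mu=1.0, a=1.0, sigma=1.0):
    sT = sigma * np.sqrt(T)
    return (norm.cdf((mu * T - a) / sT)
            + np.exp(2 * mu * a / sigma**2) * norm.cdf((-mu * T - a) / sT))

deadlines = np.array([0.5, 1.0, 2.0, 8.0])
p = p_detect_by(deadlines)   # detection probability rises with the deadline
```

A finite deadline thus caps the hit rate below 1 even with positive drift, which is how the deadline model accommodates the low detection rates at weak stimulus intensities.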
Collapse
Affiliation(s)
- Chandramouli Chandrasekaran
- Department of Electrical Engineering, Stanford University, USA
- Howard Hughes Medical Institute, Stanford University, USA
- Department of Psychological and Brain Sciences, Boston University, USA
- Department of Anatomy and Neurobiology, Boston University, USA
| | | | | |
Collapse
|
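The deadline variant described in the abstract above lends itself to a quick Monte Carlo check. The sketch below is illustrative only (function name, parameter values, and the single-barrier setup are assumptions, not taken from the paper): evidence accumulates as a Wiener process with drift toward one barrier, and a detection is counted only if the barrier is reached before the deadline, so stronger stimuli (higher drift) yield higher hit rates.

```python
import random

def simulate_wiener_deadline(drift, sigma=1.0, barrier=1.0, deadline=2.0,
                             dt=0.005, n_trials=2000, seed=0):
    """Monte Carlo sketch of a one-barrier Wiener diffusion with a
    temporal deadline: evidence accumulates with the given drift and
    diffusion noise, and a hit is recorded only if the barrier is
    reached before the deadline expires."""
    rng = random.Random(seed)
    sqrt_dt = dt ** 0.5
    hits, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while t < deadline:
            x += drift * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
            t += dt
            if x >= barrier:
                hits += 1
                rts.append(t)
                break
    hit_rate = hits / n_trials
    mean_rt = sum(rts) / len(rts) if rts else float("nan")
    return hit_rate, mean_rt

# A weak stimulus (low drift) misses the deadline more often and
# responds more slowly than a strong one.
weak = simulate_wiener_deadline(drift=0.3)
strong = simulate_wiener_deadline(drift=2.0)
```

This reproduces the qualitative point in the abstract: without the deadline, a pure drift-to-barrier model cannot produce misses at all for positive drift, so the deadline is what lets the model capture low hit rates at low intensity.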
46
|
Canal–otolith interactions alter the perception of self-motion direction. Atten Percept Psychophys 2019; 81:1698-1714. [DOI: 10.3758/s13414-019-01691-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
47
|
|
48
|
Abstract
The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., patterns that one would expect to see if an observer were Bayesian) and looking for those signatures in human or animal data. We examine two proposed signatures, showing that their derivations contain hidden assumptions that limit their applicability and that they are neither necessary nor sufficient conditions for Bayesian confidence. One signature is an average confidence of 0.75 on trials with neutral evidence. This signature holds only when class-conditioned stimulus distributions do not overlap and when internal noise is very low. Another signature is that as stimulus magnitude increases, confidence increases on correct trials but decreases on incorrect trials. This divergence signature holds only when stimulus distributions do not overlap or when noise is high. Navajas et al. (2017) have proposed an alternative form of this signature; we find no indication that this alternative form is expected under Bayesian confidence. Our observations give us pause about the usefulness of the qualitative signatures of Bayesian confidence. To determine the nature of the computations underlying confidence reports, there may be no shortcut to quantitative model comparison.
Collapse
Affiliation(s)
- William T. Adler
- Center for Neural Science, New York University, New York, NY 10003, U.S.A
| | - Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY 10003, U.S.A
| |
Collapse
|
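The core definition debated in the abstract above — confidence as the posterior probability that the chosen category is correct — can be made concrete in a toy case. The sketch below assumes two equiprobable categories with Gaussian measurement distributions at ±mu (an illustrative setup, not the specific generative models analysed in the paper):

```python
import math

def bayesian_confidence(x, mu=1.0, sigma=1.0):
    """Bayesian confidence for a toy two-category task: categories
    C = +1 and C = -1 are equiprobable and generate measurements
    x ~ N(+mu, sigma) and x ~ N(-mu, sigma).  Confidence is the
    posterior probability of the chosen (more probable) category."""
    # Log likelihood ratio of C=+1 vs C=-1 given measurement x:
    # log N(x; mu, sigma) - log N(x; -mu, sigma) = 2*mu*x / sigma^2
    llr = 2.0 * mu * x / sigma ** 2
    p_plus = 1.0 / (1.0 + math.exp(-llr))
    return max(p_plus, 1.0 - p_plus)

# Confidence is exactly 0.5 at neutral evidence (x = 0) and grows
# symmetrically with the magnitude of the evidence |x|.
```

Even this minimal example shows why signature-based tests are delicate: what confidence does at neutral or strong evidence depends on the assumed stimulus distributions and noise level, which is precisely the abstract's point.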
49
|
Abstract
Detection of the state of self-motion, such as the instantaneous heading direction, the traveled trajectory, and the traveled distance or time, is critical for efficient spatial navigation. Numerous psychophysical studies have indicated that the vestibular system, originating from the otolith organs and semicircular canals in our inner ears, provides robust signals for different aspects of self-motion perception. In addition, vestibular signals interact with other sensory signals such as visual optic flow to facilitate natural navigation. These behavioral results are consistent with recent findings in neurophysiological studies. In particular, vestibular activity in response to the translation or rotation of the head/body in darkness has been revealed in a growing number of cortical regions, many of which are also sensitive to visual motion stimuli. The temporal dynamics of the vestibular activity in the central nervous system can vary widely, ranging from acceleration-dominant to velocity-dominant. Different temporal dynamic signals may be decoded by higher level areas for different functions. For example, the acceleration signals during translation of the body in the horizontal plane may be used by the brain to estimate heading directions. Although translation and rotation signals arise from independent peripheral organs, that is, the otoliths and canals, respectively, they frequently converge onto single neurons in the central nervous system, including both the brainstem and the cerebral cortex. The convergent neurons typically exhibit stronger responses during a combined curved motion trajectory, which may serve as the neural correlate for complex path perception. During spatial navigation, traveled distance or time may be encoded by different populations of neurons in multiple regions, including the hippocampal-entorhinal system, posterior parietal cortex, and frontal cortex.
Collapse
Affiliation(s)
- Zhixian Cheng
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, United States
| | - Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
50
|
Currier TA, Nagel KI. Multisensory Control of Orientation in Tethered Flying Drosophila. Curr Biol 2018; 28:3533-3546.e6. [PMID: 30393038 DOI: 10.1016/j.cub.2018.09.020] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 08/21/2018] [Accepted: 09/11/2018] [Indexed: 11/28/2022]
Abstract
A longstanding goal of systems neuroscience is to quantitatively describe how the brain integrates sensory cues over time. Here, we develop a closed-loop orienting paradigm in Drosophila to study the algorithms by which cues from two modalities are integrated during ongoing behavior. We find that flies exhibit two behaviors when presented simultaneously with an attractive visual stripe and aversive wind cue. First, flies perform a turn sequence where they initially turn away from the wind but later turn back toward the stripe, suggesting dynamic sensory processing. Second, turns toward the stripe are slowed by the presence of competing wind, suggesting summation of turning drives. We develop a model in which signals from each modality are filtered in space and time to generate turn commands and then summed to produce ongoing orienting behavior. This computational framework correctly predicts behavioral dynamics for a range of stimulus intensities and spatial arrangements.
Collapse
Affiliation(s)
- Timothy A Currier
- Neuroscience Institute, New York University Medical Center, 435 E. 30th Street, New York, NY 10016, USA; Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA
| | - Katherine I Nagel
- Neuroscience Institute, New York University Medical Center, 435 E. 30th Street, New York, NY 10016, USA; Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA.
| |
Collapse
|
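The summation model in the Currier and Nagel abstract — turn commands from each modality computed separately and then summed — can be sketched with a minimal closed-loop simulation. Everything specific here (the sinusoidal spatial tuning, the gain values, the function name) is an illustrative assumption, not the paper's fitted model; the point is only the summation of an attractive visual drive and an aversive wind drive:

```python
import math

def simulate_orienting(theta0, stripe=0.0, wind=None,
                       k_visual=1.0, k_wind=0.6, dt=0.01, t_max=10.0):
    """Toy summation model of orienting: heading theta (radians)
    evolves under a turning drive that is the sum of an attractive
    visual term (toward the stripe) and, if present, an aversive
    wind term (away from the wind source)."""
    theta = theta0
    trajectory = [theta]
    for _ in range(int(t_max / dt)):
        drive = -k_visual * math.sin(theta - stripe)    # turn toward stripe
        if wind is not None:
            drive += k_wind * math.sin(theta - wind)    # turn away from wind
        theta += drive * dt
        trajectory.append(theta)
    return trajectory

# With a competing wind source at the stripe's position, the net
# drive toward the stripe is weaker, so the turn is slower -- the
# summation effect the abstract describes.
alone = simulate_orienting(theta0=1.0)
competing = simulate_orienting(theta0=1.0, wind=0.0)
```

In this sketch the fly still reaches the stripe when wind competes, just more slowly, mirroring the finding that turns toward the stripe are slowed, not abolished, by an opposing wind cue.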