1. Tan D, Zhang Z, Shi H, Sun N, Li Q, Bi S, Huang J, Liu Y, Guo Q, Jiang C. Bioinspired Artificial Visual-Respiratory Synapse as Multimodal Scene Recognition System with Oxidized-Vacancies MXene. Adv Mater 2024:e2407751. [PMID: 39011791] [DOI: 10.1002/adma.202407751]
Abstract
In the pursuit of artificial neural systems, integrating multimodal plasticity, memory retention, and perceptual functions is a paramount objective in achieving brain-inspired neuromorphic perceptual components that emulate the neurological excitability tuning observed in human visual-respiratory collaboration. Here, an artificial visual-respiratory synapse with monolayer oxidized MXene (VRSOM) is presented, exhibiting synergistic light and atmospheric plasticity. The VRSOM enables facile modulation of synaptic behaviors, encompassing postsynaptic current, sustained photoconductivity, stable facilitation/depression properties, and "learning-experience" behavior. These performances rely on the privileged photocarrier-trapping characteristics and the hydroxyl-preferential selectivity inherent to oxidized vacancies. Moreover, environment recognition and multimodal neural-network image identification are achieved through multisensory integration, underscoring the potential of the VRSOM in reproducing human-like perceptual attributes. The VRSOM platform holds significant promise for hardware implementation of human-like mixed-modal interactions and paves the way for perceiving multisensory neural behaviors in artificial interactive devices.
Affiliation(s)
- Dongchen Tan
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Zhaorui Zhang
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Haohao Shi
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Nan Sun
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Qikun Li
- School of Advanced Materials and Nanotechnology, Xidian University, Xi'an, 710126, China
- Sheng Bi
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Jijie Huang
- School of Materials Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Yiheng Liu
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
- Qinglei Guo
- Department of Material Science and Engineering, Frederick Seitz Material Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Chengming Jiang
- State Key Laboratory of High-Performance Precision Manufacturing, Dalian University of Technology, Dalian, 116024, China
2. Ghosh M, Béna G, Bormuth V, Goodman DFM. Nonlinear fusion is optimal for a wide class of multisensory tasks. PLoS Comput Biol 2024; 20:e1012246. [PMID: 38968324] [PMCID: PMC11253934] [DOI: 10.1371/journal.pcbi.1012246]
Abstract
Animals continuously detect information via multiple sensory channels, like vision and hearing, and integrate these signals to realise faster and more accurate decisions; a fundamental neural computation known as multisensory integration. A widespread view of this process is that multimodal neurons linearly fuse information across sensory channels. However, does linear fusion generalise beyond the classical tasks used to explore multisensory integration? Here, we develop novel multisensory tasks, which focus on the underlying statistical relationships between channels, and deploy models at three levels of abstraction: from probabilistic ideal observers to artificial and spiking neural networks. Using these models, we demonstrate that when the information provided by different channels is not independent, linear fusion performs sub-optimally and even fails in extreme cases. This leads us to propose a simple nonlinear algorithm for multisensory integration which is compatible with our current knowledge of multimodal circuits, excels in naturalistic settings and is optimal for a wide class of multisensory tasks. Thus, our work emphasises the role of nonlinear fusion in multisensory integration, and provides testable hypotheses for the field to explore at multiple levels: from single neurons to behaviour.
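The core claim, that linear fusion fails when the signals in different channels are statistically dependent, can be illustrated with a toy simulation. This is a sketch of the general idea under invented task parameters, not a reimplementation of the paper's tasks or models: the label marks whether two noisy channels carry congruent or opposing signs, so no weighted sum separates the classes, while a simple product nonlinearity does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Hypothetical task: class 1 = the two channels share a common source (same
# sign); class 0 = the channels carry opposing signs. The informative quantity
# is channel *agreement*, not the sign of either channel.
labels = rng.integers(0, 2, n)
s1 = rng.choice([-1.0, 1.0], n)
s2 = np.where(labels == 1, s1, -s1)
x1 = s1 + rng.normal(0.0, 0.5, n)   # noisy observation, channel 1
x2 = s2 + rng.normal(0.0, 0.5, n)   # noisy observation, channel 2

# Linear fusion: threshold a sum. By symmetry, sign(x1 + x2) carries no
# information about agreement, so accuracy sits at chance.
linear_acc = np.mean(((x1 + x2) > 0).astype(int) == labels)

# Nonlinear fusion: the product is positive exactly when the channels agree.
nonlinear_acc = np.mean((x1 * x2 > 0).astype(int) == labels)
```

With these settings the linear readout stays near 50% while the product rule exceeds 90%, mirroring the paper's point that dependence between channels calls for nonlinear fusion.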
Affiliation(s)
- Marcus Ghosh
- Laboratoire Jean Perrin, Institut de Biologie Paris-Seine, CNRS, Sorbonne Université, Paris, France
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Gabriel Béna
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Volker Bormuth
- Laboratoire Jean Perrin, Institut de Biologie Paris-Seine, CNRS, Sorbonne Université, Paris, France
- Dan F. M. Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
3. Zhao C, Liu A, Zhang X, Cao X, Ding Z, Sha Q, Shen H, Deng HW, Zhou W. CLCLSA: Cross-omics linked embedding with contrastive learning and self attention for integration with incomplete multi-omics data. Comput Biol Med 2024; 170:108058. [PMID: 38295477] [PMCID: PMC10959569] [DOI: 10.1016/j.compbiomed.2024.108058]
Abstract
Integration of heterogeneous, high-dimensional multi-omics data is becoming increasingly important for understanding the etiology of complex genetic diseases. Each omics technique provides only a limited view of the underlying biological process, and integrating heterogeneous omics layers simultaneously leads to a more comprehensive and detailed understanding of diseases and phenotypes. One obstacle to multi-omics data integration, however, is the existence of unpaired multi-omics data caused by instrument sensitivity and cost; studies may fail if certain aspects of the subjects are missing or incomplete. In this paper, we propose CLCLSA, a deep learning method for multi-omics integration with incomplete data using cross-omics linked unified embedding with contrastive learning and self-attention. Using complete multi-omics data as supervision, the model employs cross-omics autoencoders to learn feature representations across different types of biological data. Multi-omics contrastive learning is employed to maximize the mutual information between different omics types. In addition, feature-level and omics-level self-attention are used to dynamically identify the most informative features for multi-omics data integration. Finally, a softmax classifier performs multi-omics data classification. Extensive experiments on four public multi-omics datasets indicate that CLCLSA produces promising classification results using both complete and incomplete multi-omics data.
Affiliation(s)
- Chen Zhao
- Department of Computer Science, Kennesaw State University, Marietta, GA, 30060, USA
- Anqi Liu
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University, New Orleans, LA, 70112, USA
- Xiao Zhang
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University, New Orleans, LA, 70112, USA
- Xuewei Cao
- Department of Mathematical Sciences, Michigan Technological University, 1400 Townsend Dr, Houghton, MI, 49931, USA
- Zhengming Ding
- Department of Computer Science, Tulane University, New Orleans, LA, 70118, USA
- Qiuying Sha
- Department of Mathematical Sciences, Michigan Technological University, 1400 Townsend Dr, Houghton, MI, 49931, USA
- Hui Shen
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University, New Orleans, LA, 70112, USA
- Hong-Wen Deng
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University, New Orleans, LA, 70112, USA
- Weihua Zhou
- Department of Applied Computing, Michigan Technological University, 1400 Townsend Dr, Houghton, MI, 49931, USA
- Center for Biocomputing and Digital Health, Institute of Computing and Cybersystems, and Health Research Institute, Michigan Technological University, Houghton, MI, 49931, USA
4. Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. [PMID: 38270851] [DOI: 10.1007/978-981-99-7611-9_2]
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This concept is relatively new, in the sense that previous research has largely been conducted in two parallel disciplines: sensory integration across modalities using activity summed over a duration of time, or decision-making with a single sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged to examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal and frontal lobes in mammals. In this review, we summarize and comment on these studies that combine the two long-parallel fields of multisensory integration and decision-making, and we show how the new findings provide insight into the neural mechanisms mediating multisensory information processing in a more complete way.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
5. Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. [PMID: 37996525] [PMCID: PMC11003438] [DOI: 10.1038/s41593-023-01458-6]
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
6. Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. [PMID: 37545303] [PMCID: PMC10404926] [DOI: 10.1098/rstb.2022.0334]
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process named multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent extraordinary work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In this review, we focus on MSDM using the model system of visuo-vestibular heading. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations, and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This poses a challenge for the brain when integrating cues across time and across sensory modalities such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in the posterior and frontal/prefrontal regions, helps revise conventional views of how signals from different sensory modalities are processed, converged, and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
7. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. [PMID: 37545301] [PMCID: PMC10404932] [DOI: 10.1098/rstb.2022.0333]
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e., the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
8. Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. [PMID: 37780704] [PMCID: PMC10534010] [DOI: 10.3389/fneur.2023.1266513]
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models based on single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with other modality signals. For example, vestibular and optic flow could provide congruent and incongruent signals regarding spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
9. Coen P, Sit TPH, Wells MJ, Carandini M, Harris KD. Mouse frontal cortex mediates additive multisensory decisions. Neuron 2023; 111:2432-2447.e13. [PMID: 37295419] [PMCID: PMC10957398] [DOI: 10.1016/j.neuron.2023.05.008]
Abstract
The brain can combine auditory and visual information to localize objects. However, the cortical substrates underlying audiovisual integration remain uncertain. Here, we show that mouse frontal cortex combines auditory and visual evidence; that this combination is additive, mirroring behavior; and that it evolves with learning. We trained mice in an audiovisual localization task. Inactivating frontal cortex impaired responses to either sensory modality, while inactivating visual or parietal cortex affected only visual stimuli. Recordings from >14,000 neurons indicated that after task learning, activity in the anterior part of frontal area MOs (secondary motor cortex) additively encodes visual and auditory signals, consistent with the mice's behavioral strategy. An accumulator model applied to these sensory representations reproduced the observed choices and reaction times. These results suggest that frontal cortex adapts through learning to combine evidence across sensory cortices, providing a signal that is transformed into a binary decision by a downstream accumulator.
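The accumulator step the abstract describes can be sketched in a few lines. This is a generic bounded drift-diffusion model with additive audiovisual drift; the parameter values (drift, noise, bound) are invented for illustration and are not the fitted model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulate(vis, aud, noise=1.0, bound=30.0, max_t=5000):
    """Bounded accumulator with additive fusion of visual and auditory drift.

    Returns (choice, reaction time in steps); all parameters illustrative."""
    drift = vis + aud            # additive combination of the two modalities
    x = 0.0
    for t in range(1, max_t + 1):
        x += drift + noise * rng.normal()
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return (1 if x > 0 else -1), max_t

# Congruent bimodal trials reach the bound faster than unisensory trials,
# reproducing the qualitative speed benefit of additive multisensory evidence.
rt_multi = np.mean([accumulate(0.5, 0.5)[1] for _ in range(300)])
rt_vis = np.mean([accumulate(0.5, 0.0)[1] for _ in range(300)])
```

Because the two drift terms simply add, the bimodal condition inherits a larger total drift rate and therefore shorter mean reaction times, which is the signature behavior an additive-fusion accumulator predicts.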
Affiliation(s)
- Philip Coen
- UCL Queen Square Institute of Neurology, University College London, London, UK
- UCL Institute of Ophthalmology, University College London, London, UK
- Timothy P H Sit
- Sainsbury-Wellcome Center, University College London, London, UK
- Miles J Wells
- UCL Queen Square Institute of Neurology, University College London, London, UK
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
- Kenneth D Harris
- UCL Queen Square Institute of Neurology, University College London, London, UK
10. Jiang C, Liu J, Ni Y, Qu S, Liu L, Li Y, Yang L, Xu W. Mammalian-brain-inspired neuromorphic motion-cognition nerve achieves cross-modal perceptual enhancement. Nat Commun 2023; 14:1344. [PMID: 36906637] [PMCID: PMC10008641] [DOI: 10.1038/s41467-023-36935-w]
Abstract
Perceptual enhancement of neural and behavioral responses due to combinations of multisensory stimuli is found in many animal species across different sensory modalities. By mimicking the multisensory integration of ocular-vestibular cues for enhanced spatial perception in macaques, a bioinspired motion-cognition nerve based on a flexible multisensory neuromorphic device is demonstrated. A fast, scalable, solution-processed fabrication strategy is developed to prepare a nanoparticle-doped two-dimensional (2D)-nanoflake thin film exhibiting superior electrostatic gating capability and charge-carrier mobility. The multi-input neuromorphic device fabricated from this thin film shows history-dependent plasticity, stable linear modulation, and spatiotemporal integration capability. These characteristics ensure parallel, efficient processing of bimodal motion signals encoded as spikes and assigned different perceptual weights. Motion-cognition function is realized by classifying motion types using the mean firing rates of encoded spikes and the postsynaptic current of the device. Demonstrations of recognizing human activity types and drone flight modes reveal that the motion-cognition performance matches the bio-plausible principles of perceptual enhancement through multisensory integration. The system can potentially be applied in sensory robotics and smart wearables.
Affiliation(s)
- Chengpeng Jiang
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Research Center for Intelligent Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jiaqi Liu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Yao Ni
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Shangda Qu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Lu Liu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Yue Li
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Lu Yang
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
- Wentao Xu
- Institute of Photoelectronic Thin Film Devices and Technology, Key Laboratory of Photoelectronic Thin Film Devices and Technology of Tianjin, College of Electronic Information and Optical Engineering, Engineering Research Center of Thin Film Photoelectronic Technology of Ministry of Education, School of Materials Science and Engineering, Smart Sensing Interdisciplinary Science Center, Nankai University, Tianjin, 300350, China
- Shenzhen Research Institute of Nankai University, Shenzhen, 518000, China
11. Asahina T, Shimba K, Kotani K, Jimbo Y. Improving the accuracy of decoding monkey brain-machine interface data by estimating the state of unobserved cell assemblies. J Neurosci Methods 2023; 385:109764. [PMID: 36476748] [DOI: 10.1016/j.jneumeth.2022.109764]
Abstract
BACKGROUND: The brain-machine interface is a technology used to improve the quality of life of individuals with physical disabilities as well as healthy individuals. Improving methods for decoding brain-machine interface data is important because the accuracy and speed of movements achieved with existing technology are not comparable to those of the normal body. NEW METHOD: We incorporated a novel method of estimating cell assembly states from spike trains into an existing decoding method that used only firing-rate data, using synaptic connectivity patterns as feature values in addition to firing rates. Publicly available monkey brain-machine interface datasets were used in the study. COMPARISON WITH EXISTING METHODS: Decoding brain-machine interface data with the proposed method improved decoding accuracy compared with the existing method. RESULTS: As long as decoding was successful, the root mean square error of the proposed method was significantly smaller than that of the existing method. An artificial neural network-based decoding method yielded more stable decoding and further improved accuracy owing to the incorporation of synaptic connectivity patterns. CONCLUSIONS: The results demonstrate the usefulness of the cell assembly state estimation method for decoding brain-machine interface data.
Collapse
Affiliation(s)
- Takahiro Asahina
- School of Engineering, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Japan.
| | - Kenta Shimba
- School of Engineering, The University of Tokyo, Tokyo, Japan
| | - Kiyoshi Kotani
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Yasuhiko Jimbo
- School of Engineering, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
12
|
Han Z, Zhang C, Fu H, Zhou JT. Trusted Multi-View Classification With Dynamic Evidential Fusion. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:2551-2566. [PMID: 35503823 DOI: 10.1109/tpami.2022.3171983] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Existing multi-view classification algorithms focus on promoting accuracy by exploiting different views, typically integrating them into common representations for follow-up tasks. Although effective, it is also crucial to ensure the reliability of both the multi-view integration and the final decision, especially for noisy, corrupted and out-of-distribution data. Dynamically assessing the trustworthiness of each view for different samples could provide reliable integration. This can be achieved through uncertainty estimation. With this in mind, we propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC), providing a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The proposed TMC can promote classification reliability by considering evidence from each view. Specifically, we introduce the variational Dirichlet to characterize the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness against possible noise or corruption. Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
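The evidence-level fusion described in this abstract can be sketched with subjective-logic opinions derived from Dirichlet evidence, combined by a reduced Dempster's rule. This is a minimal illustration of the idea only, not the paper's implementation; the function names and example evidence counts are invented for the sketch.

```python
import numpy as np

def dirichlet_opinion(evidence):
    """Turn per-class evidence e_k >= 0 into a subjective-logic opinion:
    alpha_k = e_k + 1, S = sum(alpha), belief b_k = e_k / S, uncertainty u = K / S."""
    evidence = np.asarray(evidence, dtype=float)
    num_classes = evidence.size
    strength = evidence.sum() + num_classes
    return evidence / strength, num_classes / strength

def combine_views(b1, u1, b2, u2):
    """Reduced Dempster's rule: fuse two views' opinions into one."""
    conflict = float(np.outer(b1, b2).sum() - (b1 * b2).sum())  # mass on disagreeing classes
    norm = 1.0 - conflict
    fused_b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    fused_u = (u1 * u2) / norm
    return fused_b, fused_u

# View 1 confidently favors class 0; view 2 is weakly informative.
b1, u1 = dirichlet_opinion([20.0, 1.0, 1.0])
b2, u2 = dirichlet_opinion([2.0, 1.0, 1.0])
b, u = combine_views(b1, u1, b2, u2)
# The fused opinion keeps class 0 dominant while shrinking the overall uncertainty.
```

Because each view carries its own uncertainty mass, an unreliable view contributes little belief, which is how this style of fusion stays robust to noisy or corrupted modalities.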
Collapse
|
13
|
Hu B, Guan ZH, Chen G, Chen CLP. Neuroscience and Network Dynamics Toward Brain-Inspired Intelligence. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:10214-10227. [PMID: 33909581 DOI: 10.1109/tcyb.2021.3071110] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This article surveys the interdisciplinary research of neuroscience, network science, and dynamic systems, with emphasis on the emergence of brain-inspired intelligence. To replicate brain intelligence, a practical way is to reconstruct cortical networks with dynamic activities that nourish the brain functions, instead of using only artificial computing networks. The survey provides a complex network and spatiotemporal dynamics (abbr. network dynamics) perspective for understanding the brain and cortical networks and, furthermore, develops integrated approaches of neuroscience and network dynamics toward building brain-inspired intelligence with learning and resilience functions. Presented are fundamental concepts and principles of complex networks, neuroscience, and hybrid dynamic systems, as well as relevant studies about the brain and intelligence. Other promising research directions, such as brain science, data science, quantum information science, and machine behavior are also briefly discussed toward future applications.
Collapse
|
14
|
Zhang J, Gu Y, Chen A, Yu Y. Unveiling Dynamic System Strategies for Multisensory Processing: From Neuronal Fixed-Criterion Integration to Population Bayesian Inference. Research (Wash D C) 2022; 2022:9787040. [PMID: 36072271 PMCID: PMC9422331 DOI: 10.34133/2022/9787040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/18/2022] [Indexed: 11/17/2022] Open
Abstract
Multisensory processing is of vital importance for survival in the external world. Brain circuits can both integrate and separate visual and vestibular senses to infer self-motion and the motion of other objects. However, it is largely debated how multisensory brain regions process such multisensory information and whether they follow the Bayesian strategy in this process. Here, we combined macaque physiological recordings in the dorsal medial superior temporal area (MST-d) with modeling of synaptically coupled multilayer continuous attractor neural networks (CANNs) to study the underlying neuronal circuit mechanisms. In contrast to previous theoretical studies that focused on unisensory direction preference, our analysis showed that synaptic coupling induced cooperation and competition in the multisensory circuit and caused single MST-d neurons to switch between sensory integration and separation modes based on the fixed-criterion causal strategy, which is determined by the synaptic coupling strength. Furthermore, the prior of sensory reliability was represented by pooling diversified criteria at the MST-d population level, and the Bayesian strategy was achieved in downstream neurons whose causal inference flexibly changed with the prior. The CANN model also showed that synaptic input balance is the dynamic origin of neuronal direction preference formation and further explained the misalignment between direction preference and inference observed in previous studies. This work provides a computational framework for a new brain-inspired algorithm underlying multisensory computation.
Collapse
Affiliation(s)
- Jiawei Zhang
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
| | - Yong Gu
- Key Laboratory of Primate Neurobiology, Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
| | - Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, 3663 Zhongshan Road N., Shanghai 200062, China
| | - Yuguo Yu
- State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Shanghai Artificial Intelligence Laboratory, Research Institute of Intelligent and Complex Systems and Institute of Science and Technology for Brain-Inspired Intelligence, Human Phenome Institute, Shanghai 200433, China
| |
Collapse
|
15
|
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337 PMCID: PMC9849545 DOI: 10.1007/s12264-022-00916-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023] Open
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
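For Gaussian cues, the statistically Bayesian-optimal integration this review discusses reduces to inverse-variance (reliability) weighting. The sketch below is a standard textbook illustration with invented numbers, not values from any cited experiment.

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Bayes-optimal fusion of two independent Gaussian heading cues:
    each cue is weighted in proportion to its reliability (1 / variance)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    # The combined variance is always smaller than either cue's variance,
    # which is the integration benefit measured psychophysically.
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var

# Optic flow says 10 deg (reliable); the vestibular cue says 4 deg (noisier).
mu, var = integrate_cues(10.0, 1.0, 4.0, 4.0)
# The fused heading of 8.8 deg is pulled toward the more reliable visual cue,
# and the fused variance (0.8) is below both single-cue variances.
```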
Collapse
|
16
|
Lin CHS, Garrido MI. Towards a cross-level understanding of Bayesian inference in the brain. Neurosci Biobehav Rev 2022; 137:104649. [PMID: 35395333 DOI: 10.1016/j.neubiorev.2022.104649] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Revised: 02/28/2022] [Accepted: 03/29/2022] [Indexed: 10/18/2022]
Abstract
Perception emerges from unconscious probabilistic inference, which guides behaviour in our ubiquitously uncertain environment. Bayesian decision theory is a prominent computational model that describes how people make rational decisions using noisy and ambiguous sensory observations. However, critical questions have been raised about the validity of the Bayesian framework in explaining the mental process of inference. Firstly, some natural behaviours deviate from the Bayesian optimum. Secondly, the neural mechanisms that support Bayesian computations in the brain are yet to be understood. Taking Marr's cross-level approach, we review the recent progress made in addressing these challenges. We first review studies that combined behavioural paradigms and modelling approaches to explain both optimal and suboptimal behaviours. Next, we evaluate the theoretical advances and the current evidence for ecologically feasible algorithms and neural implementations in the brain, which may enable probabilistic inference. We argue that this cross-level approach is necessary for the worthwhile pursuit of uncovering mechanistic accounts of human behaviour.
Collapse
Affiliation(s)
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia.
| | - Marta I Garrido
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia; Australian Research Council for Integrative Brain Function, Australia
| |
Collapse
|
17
|
Zheng Q, Zhou L, Gu Y. Temporal synchrony effects of optic flow and vestibular inputs on multisensory heading perception. Cell Rep 2021; 37:109999. [PMID: 34788608 DOI: 10.1016/j.celrep.2021.109999] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 08/21/2021] [Accepted: 10/21/2021] [Indexed: 11/25/2022] Open
Abstract
Precise heading perception requires integration of optic flow and vestibular cues, yet the two cues often carry distinct temporal dynamics that may confound the cue integration benefit. Here, we varied the temporal offset between the two sensory inputs while macaques discriminated headings around straight ahead. We find that the best heading performance does not occur under the natural condition of synchronous inputs with zero offset, but rather when visual stimuli are artificially adjusted to lead the vestibular stimuli by a few hundred milliseconds. This amount exactly matches the lag between the vestibular acceleration and visual speed signals as measured from single-unit activity in frontal and posterior parietal cortices. Manually aligning cues in these areas best facilitates integration with some nonlinear gain modulation effects. These findings are consistent with predictions from a model by which the brain integrates optic flow speed with a faster vestibular acceleration signal for sensing instantaneous heading direction during self-motion in the environment.
Collapse
Affiliation(s)
- Qihao Zheng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
| | - Luxin Zhou
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
| | - Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, 201210 Shanghai, China.
| |
Collapse
|
18
|
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436 PMCID: PMC8639080 DOI: 10.1371/journal.pbio.3001465] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 12/02/2021] [Accepted: 11/01/2021] [Indexed: 11/22/2022] Open
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
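The Bayesian causal inference model invoked in this abstract can be sketched in the model-averaging form popularized by Körding and colleagues: infer the posterior probability of a common source, then mix the fused and segregated location estimates accordingly. The parameter values and zero prior mean below are illustrative assumptions, not the study's fitted values.

```python
import math

def gauss(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def bci_auditory_estimate(x_a, x_v, sig_a=2.0, sig_v=1.0, sig_p=10.0, p_common=0.5):
    """Model-averaging Bayesian causal inference for an auditory location report:
    weight the fused and segregated estimates by the posterior probability
    that the auditory and visual signals share a common source (prior mean 0)."""
    va, vv, vp = sig_a ** 2, sig_v ** 2, sig_p ** 2
    denom = va * vv + va * vp + vv * vp
    # Likelihood of the signal pair under a single common source.
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp + x_a ** 2 * vv + x_v ** 2 * va) / denom) \
        / (2 * math.pi * math.sqrt(denom))
    # Likelihood under two independent sources.
    like_c2 = gauss(x_a, 0.0, va + vp) * gauss(x_v, 0.0, vv + vp)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))
    # Optimal auditory estimates under each causal structure, then average.
    s_fused = (x_a / va + x_v / vv) / (1.0 / va + 1.0 / vv + 1.0 / vp)
    s_alone = (x_a / va) / (1.0 / va + 1.0 / vp)
    return post_c1 * s_fused + (1.0 - post_c1) * s_alone

# Small disparity: a common source is likely, so the auditory estimate shifts toward vision.
near = bci_auditory_estimate(x_a=3.0, x_v=1.0)
# Large disparity: a common source is unlikely, so the estimate stays near the auditory signal.
far = bci_auditory_estimate(x_a=3.0, x_v=-20.0)
```

In the study's terms, prestimulus attention would act on the sensory variances (e.g., shrinking sig_v), while the postcued report determines which modality's estimate this computation is applied to.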
Collapse
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
19
|
Sohn H, Narain D. Neural implementations of Bayesian inference. Curr Opin Neurobiol 2021; 70:121-129. [PMID: 34678599 DOI: 10.1016/j.conb.2021.09.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 08/18/2021] [Accepted: 09/09/2021] [Indexed: 10/20/2022]
Abstract
Bayesian inference has emerged as a general framework that captures how organisms make decisions under uncertainty. Recent experimental findings reveal disparate mechanisms for how the brain generates behaviors predicted by normative Bayesian theories. Here, we identify two broad classes of neural implementations for Bayesian inference: a modular class, where each probabilistic component of Bayesian computation is independently encoded, and a transform class, where uncertain measurements are converted to Bayesian estimates through latent processes. Many recent experimental neuroscience findings studying probabilistic inference broadly fall into these classes. We identify potential avenues for synthesis across these two classes and the disparities that, at present, cannot be reconciled. We conclude that to distinguish among implementation hypotheses for Bayesian inference, we require greater engagement among theoretical and experimental neuroscientists in an effort that spans different scales of analysis, circuits, tasks, and species.
Collapse
Affiliation(s)
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Devika Narain
- Dept. of Neuroscience, Erasmus University Medical Center, Rotterdam, 3015 CN, the Netherlands.
| |
Collapse
|
20
|
Noel JP, Angelaki DE. Cognitive, Systems, and Computational Neurosciences of the Self in Motion. Annu Rev Psychol 2021; 73:103-129. [PMID: 34546803 DOI: 10.1146/annurev-psych-021021-103038] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated, a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA; Tandon School of Engineering, New York University, New York, NY 11201, USA
| |
Collapse
|
21
|
Chen L, Liao HI. Microsaccadic Eye Movements but not Pupillary Dilation Response Characterizes the Crossmodal Freezing Effect. Cereb Cortex Commun 2021; 1:tgaa072. [PMID: 34296132 PMCID: PMC8153075 DOI: 10.1093/texcom/tgaa072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 09/24/2020] [Accepted: 09/25/2020] [Indexed: 11/14/2022] Open
Abstract
In typical spatial orienting tasks, the perception of crossmodal (e.g., audiovisual) stimuli evokes greater pupil dilation and microsaccade inhibition than unisensory stimuli (e.g., visual). Characteristic pupil dilation and microsaccade inhibition have been observed in response to "salient" events/stimuli. Although the "saliency" account is appealing in the spatial domain, whether this occurs in the temporal context remains largely unknown. Here, on a brief temporal scale (within 1 s) and under involuntary temporal attention, we investigated how eye-metric characteristics reflect the temporal dynamics of perceptual organization, with and without multisensory integration. We adopted the crossmodal freezing paradigm using the classical Ternus apparent motion. Results showed that synchronous beeps biased the perceptual report for group motion and triggered prolonged sound-induced oculomotor inhibition (OMI), whereas the sound-induced OMI was not obvious in a crossmodal task-free scenario (visual localization without audiovisual integration). A general pupil dilation response was observed in the presence of sounds in both the visual Ternus motion categorization and visual localization tasks. This study provides the first empirical account of crossmodal integration by capturing microsaccades within a brief temporal scale; OMI, but not the pupillary dilation response, characterizes task-specific audiovisual integration (shown by the crossmodal freezing effect).
Collapse
Affiliation(s)
- Lihan Chen
- Department of Brain and Cognitive Sciences, Schools of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
| | - Hsin-I Liao
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0198, Japan
| |
Collapse
|
22
|
Okazawa G, Hatch CE, Mancoo A, Machens CK, Kiani R. Representational geometry of perceptual decisions in the monkey parietal cortex. Cell 2021; 184:3748-3761.e18. [PMID: 34171308 PMCID: PMC8273140 DOI: 10.1016/j.cell.2021.05.022] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 12/23/2020] [Accepted: 05/17/2021] [Indexed: 11/22/2022]
Abstract
Lateral intraparietal (LIP) neurons represent formation of perceptual decisions involving eye movements. In circuit models for these decisions, neural ensembles that encode actions compete to form decisions. Consequently, representation and readout of the decision variables (DVs) are implemented similarly for decisions with identical competing actions, irrespective of input and task context differences. Further, DVs are encoded as partially potentiated action plans through balance of activity of action-selective ensembles. Here, we test those core principles. We show that in a novel face-discrimination task, LIP firing rates decrease with supporting evidence, contrary to conventional motion-discrimination tasks. These opposite response patterns arise from similar mechanisms in which decisions form along curved population-response manifolds misaligned with action representations. These manifolds rotate in state space based on context, indicating distinct optimal readouts for different tasks. We show similar manifolds in lateral and medial prefrontal cortices, suggesting similar representational geometry across decision-making circuits.
Collapse
Affiliation(s)
- Gouki Okazawa
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Christina E Hatch
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Allan Mancoo
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
| | - Christian K Machens
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
| | - Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York, NY 10016, USA; Department of Psychology, New York University, New York, NY 10003, USA.
| |
Collapse
|
23
|
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Collapse
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands;
| |
Collapse
|
24
|
Yin D, Kaiser M. Understanding neural flexibility from a multifaceted definition. Neuroimage 2021; 235:118027. [PMID: 33836274 DOI: 10.1016/j.neuroimage.2021.118027] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 01/19/2021] [Accepted: 03/27/2021] [Indexed: 11/19/2022] Open
Abstract
Flexibility is a hallmark of human intelligence. Emerging studies have proposed several flexibility measurements at the level of individual regions, to produce a brain map of neural flexibility. However, flexibility is usually inferred from separate components of brain activity (i.e., intrinsic/task-evoked), and different definitions are used. Moreover, recent studies have argued that neural processing may be more than a task-driven and intrinsic dichotomy. Therefore, the understanding of neural flexibility is still incomplete. To address this issue, we propose a multifaceted definition of neural flexibility according to three key features: broad cognitive engagement, distributed connectivity, and adaptive connectome dynamics. For these three features, we first review the advances in computational approaches, their functional relevance, and their potential pitfalls. We then suggest a set of metrics that can help us assign a flexibility rating to each region. Subsequently, we present an emergent probabilistic view for further understanding the functional operation of individual regions in the unified framework of intrinsic and task-driven states. Finally, we highlight several areas related to the multifaceted definition of neural flexibility for future research. This review not only strengthens our understanding of the flexible human brain, but also suggests that the measure of neural flexibility could bridge the gap between understanding intrinsic and task-driven brain function dynamics.
Collapse
Affiliation(s)
- Dazhi Yin
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China.
| | - Marcus Kaiser
- School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, UK; School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK; Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China.
| |
Collapse
|
25
|
Zheng M, Xu J, Keniston L, Wu J, Chang S, Yu L. Choice-dependent cross-modal interaction in the medial prefrontal cortex of rats. Mol Brain 2021; 14:13. [PMID: 33446258 PMCID: PMC7809823 DOI: 10.1186/s13041-021-00732-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 01/08/2021] [Indexed: 11/25/2022] Open
Abstract
Cross-modal interaction (CMI) can significantly influence perceptual or decision-making processes in many circumstances. However, it remains poorly understood what integrative strategies are employed by the brain to deal with different task contexts. To explore this, we examined neural activities of the medial prefrontal cortex (mPFC) of rats performing cue-guided two-alternative forced-choice tasks. In a task requiring rats to discriminate stimuli based on an auditory cue, the simultaneous presentation of an uninformative visual cue substantially strengthened mPFC neurons' capability of auditory discrimination, mainly through enhancing the response to the preferred cue. It also increased the number of neurons revealing a cue preference. If the task was changed slightly so that a visual cue, like the auditory one, denoted a specific behavioral direction, mPFC neurons frequently showed a different CMI pattern, with a cross-modal enhancement effect best evoked in information-congruent multisensory trials. In a choice-free task, however, the majority of neurons failed to show a cross-modal enhancement effect and cue preference. These results indicate that CMI at the neuronal level is context-dependent in a way that differs from what has been shown in previous studies.
Collapse
Affiliation(s)
- Mengyao Zheng
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853 USA
| | - Jing Wu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| | - Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, 200062 China
| |
Collapse
|
26
|
Pisupati S, Chartarifsky-Lynn L, Khanal A, Churchland AK. Lapses in perceptual decisions reflect exploration. eLife 2021; 10:55490. [PMID: 33427198 PMCID: PMC7846276 DOI: 10.7554/elife.55490] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2020] [Accepted: 01/10/2021] [Indexed: 12/17/2022] Open
Abstract
Perceptual decision-makers often display a constant rate of errors independent of evidence strength. These ‘lapses’ are treated as a nuisance arising from noise tangential to the decision, e.g. inattention or motor errors. Here, we use a multisensory decision task in rats to demonstrate that these explanations cannot account for lapses’ stimulus dependence. We propose a novel explanation: lapses reflect a strategic trade-off between exploiting known rewarding actions and exploring uncertain ones. We tested this model’s predictions by selectively manipulating one action’s reward magnitude or probability. As uniquely predicted by this model, changes were restricted to lapses associated with that action. Finally, we show that lapses are a powerful tool for assigning decision-related computations to neural structures based on disruption experiments (here, posterior striatum and secondary motor cortex). These results suggest that lapses reflect an integral component of decision-making and are informative about action values in normal and disrupted brain states.
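The exploration account of lapses can be illustrated with a toy softmax choice model (the function names and parameter values are invented for this sketch, not the paper's fitted model): with a finite inverse temperature, choice probabilities never saturate even at the strongest evidence, and the asymptotic lapse rate depends on the actions' reward magnitudes, which is the signature the paper's reward manipulations test.

```python
import math

def p_choose_right(evidence, r_left=1.0, r_right=1.0, sigma=1.0, beta=4.0):
    """Softmax exploration over expected action values.
    The perceptual posterior p(right | evidence) weights each action's reward."""
    # Posterior belief that the stimulus category is 'right', given noisy evidence.
    p_right = 0.5 * (1.0 + math.erf(evidence / (sigma * math.sqrt(2.0))))
    q_right = p_right * r_right          # expected reward of choosing right
    q_left = (1.0 - p_right) * r_left    # expected reward of choosing left
    # Finite beta keeps some probability on the lower-valued action: a 'lapse'.
    return 1.0 / (1.0 + math.exp(-beta * (q_right - q_left)))

# Even with overwhelming evidence, choices do not saturate at 1.0.
strong = p_choose_right(10.0)
# Raising the right action's reward shrinks lapses on rightward choices only,
# mirroring the paper's selective reward-magnitude manipulation.
strong_big_reward = p_choose_right(10.0, r_right=2.0)
```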
Affiliation(s)
- Sashank Pisupati
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States; CSHL School of Biological Sciences, Cold Spring Harbor, New York, United States
- Lital Chartarifsky-Lynn
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States; CSHL School of Biological Sciences, Cold Spring Harbor, New York, United States
- Anup Khanal
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States
27
Beierholm U, Rohe T, Ferrari A, Stegle O, Noppeney U. Using the past to estimate sensory uncertainty. eLife 2020; 9:54172. [PMID: 33319749 PMCID: PMC7806269 DOI: 10.7554/elife.54172] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 12/13/2020] [Indexed: 01/14/2023] Open
Abstract
To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference in which sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
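The exponential-discounting approximation described in the abstract can be sketched in a few lines: the observer's variance estimate is a leaky average over past noise samples, and audiovisual fusion weights each cue by its inverse variance. The learning rate `lam` and the toy noise values are illustrative assumptions:

```python
def discounted_variance(noise_samples, lam=0.3):
    """Exponentially discounted running estimate of sensory variance:
    recent samples count more, older samples decay geometrically."""
    est = noise_samples[0]
    for v in noise_samples[1:]:
        est = (1.0 - lam) * est + lam * v
    return est

def fuse(x_aud, var_aud, x_vis, var_vis):
    """Reliability-weighted (inverse-variance) audiovisual fusion."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_aud)
    return w_vis * x_vis + (1.0 - w_vis) * x_aud

# After an abrupt jump in visual noise (1.0 -> 4.0), the discounted estimate
# lags the true value, blending past and current reliability:
est = discounted_variance([1.0, 1.0, 1.0, 4.0, 4.0])
print(1.0 < est < 4.0)  # True: between the old and new noise levels
```

Because the estimate sits between the old and new noise levels after a jump, the visual weight computed by `fuse` reflects both past and current reliability, which is the signature behavior the experiments report.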
Affiliation(s)
- Ulrik Beierholm
- Psychology Department, Durham University, Durham, United Kingdom
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Psychology, Friedrich-Alexander University Erlangen-Nuernberg, Erlangen, Germany
- Ambra Ferrari
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
- Oliver Stegle
- Max Planck Institute for Intelligent Systems, Tübingen, Germany; European Molecular Biology Laboratory, Genome Biology Unit, Heidelberg, Germany; Division of Computational Genomics and Systems Genetics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Uta Noppeney
- Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
28
Yeon J, Rahnev D. The suboptimality of perceptual decision making with multiple alternatives. Nat Commun 2020; 11:3857. [PMID: 32737317 PMCID: PMC7395091 DOI: 10.1038/s41467-020-17661-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 07/08/2020] [Indexed: 11/23/2022] Open
Abstract
It is becoming widely appreciated that human perceptual decision making is suboptimal, but the nature and origins of this suboptimality remain poorly understood. Most past research has employed tasks with two stimulus categories, but such designs cannot fully capture the limitations inherent in naturalistic perceptual decisions, where choices are rarely between only two alternatives. We conduct four experiments with tasks involving multiple alternatives and use computational modeling to determine the decision-level representation on which the perceptual decisions are based. The results from all four experiments point to the existence of robust suboptimality, such that most of the information in the sensory representation is lost during the transformation to a decision-level representation. These results reveal severe limits in the quality of decision-level representations for multiple alternatives and have strong implications for perceptual decision making in naturalistic settings.
Affiliation(s)
- Jiwon Yeon
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
29
P U, G G LP. Skeleton-based STIP feature and discriminant sparse coding for human action recognition. INTERNATIONAL JOURNAL OF INTELLIGENT UNMANNED SYSTEMS 2020. [DOI: 10.1108/ijius-12-2019-0067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose: To find a successful human action recognition (HAR) system for unmanned environments.
Design/methodology/approach: This paper describes the key technology of an efficient HAR system. Advancements to three key steps of the HAR system, namely feature extraction, feature description, and action classification, are implemented and analyzed to improve the accuracy of existing HAR systems. The usage of the implemented HAR system in the self-driving car is summarized. Finally, the results of the HAR system are compared with other existing action recognition systems.
Findings: This paper presents the proposed modifications and improvements to the HAR system: a skeleton-based spatiotemporal interest point (STIP) feature, an improved discriminative sparse descriptor for the identified feature, and linear action classification.
Research limitations/implications: The experiments are carried out on captured benchmark data sets and still need to be analyzed in a real-time environment.
Practical implications: The middleware support between the proposed HAR system and the self-driving car system opens several other challenging research opportunities.
Social implications: The authors' work provides a way to take a step forward in machine vision, especially in self-driving cars.
Originality/value: A method for extracting the new feature and constructing an improved discriminative sparse feature descriptor is introduced.
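The descriptor and classification stages named in the findings can be illustrated with a generic greedy sparse code over a dictionary of feature atoms; this is a hedged sketch of standard matching pursuit plus a linear classifier, not the paper's discriminative formulation (the dictionary, `k`, and toy features are assumptions):

```python
import numpy as np

def sparse_code(x, D, k=2):
    """Greedy k-sparse code of feature vector x over dictionary D
    (columns = unit-norm atoms); illustrative matching pursuit."""
    residual = np.asarray(x, dtype=float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(k):
        corr = D.T @ residual              # correlation with each atom
        j = int(np.argmax(np.abs(corr)))   # best-matching atom
        code[j] += corr[j]
        residual -= corr[j] * D[:, j]      # explain away that atom
    return code

def linear_classify(code, W, b):
    """Linear action classification on the sparse descriptor."""
    return int(np.argmax(W @ code + b))

# Toy usage: with an orthonormal dictionary, the generating atoms
# are recovered exactly.
D = np.eye(6)                      # 6 unit-norm atoms
x = 2.0 * D[:, 3] + 0.5 * D[:, 5]  # "feature" built from atoms 3 and 5
c = sparse_code(x, D, k=2)
print(c[3], c[5])  # 2.0 0.5
```

In the sparse-coding literature, the descriptor `c` would then be pooled over a video clip and fed to the linear classifier; the discriminative variant additionally trains the dictionary so that codes separate the action classes.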
30
Romo R, Rossi-Pool R. Turning Touch into Perception. Neuron 2020; 105:16-33. [PMID: 31917952 DOI: 10.1016/j.neuron.2019.11.033] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 11/16/2019] [Accepted: 11/27/2019] [Indexed: 12/27/2022]
Abstract
Many brain areas modulate their activity during vibrotactile tasks. The activity from these areas may code the stimulus parameters, stimulus perception, or perceptual reports. Here, we discuss findings obtained in behaving monkeys aimed at understanding these processes. In brief, neurons from the somatosensory thalamus and primary somatosensory cortex (S1) code the stimulus parameters only during the stimulation periods. In contrast, areas downstream of S1 code the stimulus parameters not only during the task components but also during perception. Surprisingly, the midbrain dopamine system is an actor not previously considered in perception. We discuss the evidence that it codes the subjective magnitude of a sensory percept. The findings reviewed here may help us to understand where and how sensation transforms into perception in the brain.
Affiliation(s)
- Ranulfo Romo
- Instituto de Fisiología Celular - Neurociencias, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico; El Colegio Nacional, 06020 Mexico City, Mexico
- Román Rossi-Pool
- Instituto de Fisiología Celular - Neurociencias, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico