1
Feuerriegel D. Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling? Cortex 2024; 177:302-320. [PMID: 38905873 DOI: 10.1016/j.cortex.2024.06.003]
Abstract
Our brains are constantly adapting to changes in our visual environments. Neural adaptation exerts a persistent influence on the activity of sensory neurons and our perceptual experience; however, there is a lack of consensus regarding how adaptation is implemented in the visual system. One account describes fatigue-based mechanisms embedded within local networks of stimulus-selective neurons (networked fatigue models). Another depicts adaptation as a product of stimulus expectations (predictive coding models). In this review, I evaluate neuroimaging and psychophysical evidence that poses fundamental problems for predictive coding models of neural adaptation. Specifically, I discuss observations of distinct repetition and expectation effects, as well as incorrect predictions of repulsive adaptation aftereffects made by predictive coding accounts. Based on this evidence, I argue that networked fatigue models provide a more parsimonious account of adaptation effects in the visual system. Although stimulus expectations can be formed based on recent stimulation history, any consequences of these expectations are likely to co-occur (or interact) with effects of fatigue-based adaptation. I conclude by proposing novel, testable hypotheses relating to interactions between fatigue-based adaptation and other predictive processes, focusing on stimulus feature extrapolation phenomena.
Affiliation(s)
- Daniel Feuerriegel
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia.
2
Gogliettino AR, Cooler S, Vilkhu RS, Brackbill NJ, Rhoades C, Wu EG, Kling A, Sher A, Litke AM, Chichilnisky EJ. Modeling responses of macaque and human retinal ganglion cells to natural images using a convolutional neural network. bioRxiv 2024:2024.03.22.586353. [PMID: 38585930 PMCID: PMC10996505 DOI: 10.1101/2024.03.22.586353]
Abstract
Linear-nonlinear (LN) cascade models provide a simple way to capture retinal ganglion cell (RGC) responses to artificial stimuli such as white noise, but their ability to model responses to natural images is limited. Recently, convolutional neural network (CNN) models have been shown to produce light response predictions that were substantially more accurate than those of an LN model. However, this modeling approach has not yet been applied to responses of macaque or human RGCs to natural images. Here, we train and test a CNN model on responses to natural images of the four numerically dominant RGC types in the macaque and human retina - ON parasol, OFF parasol, ON midget and OFF midget cells. Compared with the LN model, the CNN model provided substantially more accurate response predictions. Linear reconstructions of the visual stimulus were more accurate for CNN-generated than for LN-model-generated responses, relative to reconstructions obtained from the recorded data. These findings demonstrate the effectiveness of a CNN model in capturing light responses of major RGC types in the macaque and human retinas in natural conditions.
3
Fu J, Nie C, Sun F, Li G, Shi H, Wei X. Bionic visual-audio photodetectors with in-sensor perception and preprocessing. Sci Adv 2024; 10:eadk8199. [PMID: 38363832 PMCID: PMC10871537 DOI: 10.1126/sciadv.adk8199]
Abstract
Serving as the "eyes" and "ears" of the Internet of Things, optical and acoustic sensors are the fundamental components in hardware systems. Mainstream hardware systems often comprise numerous discrete sensors, conversion modules, and processing units, resulting in complex architectures that are less efficient than human sensory pathways. Here, a visual-audio photodetector inspired by the human perception system is proposed to enable all-in-one visual and acoustic signal detection with computing capability. This device not only captures light but also optically records sound waves, thus achieving "watching" and "listening" within a single unit. The gate-tunable positive, negative, and zero photoresponses lead to highly programmable responsivities. This programmability enables the execution of diverse functions, including visual feature extraction, object classification, and sound wave manipulation. These results showcase the potential of expanding perception approaches in neuromorphic devices, opening up new possibilities to craft intelligent and compact hardware systems.
Affiliation(s)
- Jintao Fu
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Changbin Nie
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Feiying Sun
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Genglin Li
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Haofei Shi
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Xingzhan Wei
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
4
Turner W, Sexton C, Hogendoorn H. Neural mechanisms of visual motion extrapolation. Neurosci Biobehav Rev 2024; 156:105484. [PMID: 38036162 DOI: 10.1016/j.neubiorev.2023.105484]
Abstract
Because neural processing takes time, the brain only has delayed access to sensory information. When localising moving objects, this is problematic, as an object will have moved on by the time its position has been determined. Here, we consider predictive motion extrapolation as a fundamental delay-compensation strategy. From a population-coding perspective, we outline how extrapolation can be achieved by a forwards shift in the population-level activity distribution. We identify general mechanisms underlying such shifts, involving various asymmetries which facilitate the targeted 'enhancement' and/or 'dampening' of population-level activity. We classify these on the basis of their potential implementation (intra- vs inter-regional processes) and consider specific examples in different visual regions. We consider how motion extrapolation can be achieved during inter-regional signaling, and how asymmetric connectivity patterns which support extrapolation can emerge spontaneously from local synaptic learning rules. Finally, we consider how more abstract 'model-based' predictive strategies might be implemented. Overall, we present an integrative framework for understanding how the brain determines the real-time position of moving objects, despite neural delays.
Affiliation(s)
- William Turner
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia.
- Hinze Hogendoorn
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia
5
Matsumoto A, Yonehara K. Emerging computational motifs: Lessons from the retina. Neurosci Res 2023; 196:11-22. [PMID: 37352934 DOI: 10.1016/j.neures.2023.06.003]
Abstract
The retinal neuronal circuit is the first stage of visual processing in the central nervous system. The efforts of scientists over the last few decades indicate that the retina is not merely an array of photosensitive cells, but also a processor that performs various computations. Within a thickness of only ∼200 µm, the retina consists of diverse forms of neuronal circuits, each of which encodes different visual features. Since the discovery of direction-selective cells by Horace Barlow and Richard Hill, the mechanisms that generate direction selectivity in the retina have remained a fascinating research topic. This review provides an overview of recent advances in our understanding of direction-selective circuits. Beyond the conventional wisdom of direction selectivity, emerging findings indicate that the retina utilizes complicated and sophisticated mechanisms in which excitatory and inhibitory pathways are involved in the efficient encoding of motion information. As will become evident, the discovery of computational motifs in the retina facilitates an understanding of how sensory systems establish feature selectivity.
Affiliation(s)
- Akihiro Matsumoto
- Danish Research Institute of Translational Neuroscience - DANDRITE, Nordic-EMBL Partnership for Molecular Medicine, Department of Biomedicine, Aarhus University, Aarhus, Denmark; Department of Gene Function and Phenomics, National Institute of Genetics, Mishima, Japan; Department of Genetics, The Graduate University for Advanced Studies (SOKENDAI), Mishima, Japan.
- Keisuke Yonehara
- Danish Research Institute of Translational Neuroscience - DANDRITE, Nordic-EMBL Partnership for Molecular Medicine, Department of Biomedicine, Aarhus University, Aarhus, Denmark; Department of Gene Function and Phenomics, National Institute of Genetics, Mishima, Japan; Department of Genetics, The Graduate University for Advanced Studies (SOKENDAI), Mishima, Japan
6
Huang PY, Jiang BY, Chen HJ, Xu JY, Wang K, Zhu CY, Hu XY, Li D, Zhen L, Zhou FC, Qin JK, Xu CY. Neuro-inspired optical sensor array for high-accuracy static image recognition and dynamic trace extraction. Nat Commun 2023; 14:6736. [PMID: 37872169 PMCID: PMC10593955 DOI: 10.1038/s41467-023-42488-9]
Abstract
Neuro-inspired vision systems hold great promise to address the growing demands of mass data processing for edge computing, a distributed framework that brings computation and data storage closer to the sources of data. In addition to the capability of static image sensing and processing, the hardware implementation of a neuro-inspired vision system also requires the fulfilment of detecting and recognizing moving targets. Here, we demonstrated a neuro-inspired optical sensor based on two-dimensional NbS2/MoS2 hybrid films, which featured remarkable photo-induced conductance plasticity and low electrical energy consumption. A neuro-inspired optical sensor array with 10 × 10 NbS2/MoS2 phototransistors integrated sensing, memory, and contrast-enhancement functions for static images, enabling a convolutional neural network (CNN) to achieve high image recognition accuracy. More importantly, in-sensor trajectory registration of moving light spots was experimentally implemented such that the post-processing could yield a high restoration accuracy. Our neuro-inspired optical sensor array could provide a fascinating platform for the implementation of high-performance artificial vision systems.
Affiliation(s)
- Pei-Yu Huang
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Bi-Yi Jiang
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Hong-Ji Chen
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Jia-Yi Xu
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Kang Wang
- Key Laboratory of MEMS of the Ministry of Education, Southeast University, Nanjing, 210096, China
- Cheng-Yi Zhu
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Xin-Yan Hu
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China
- Dong Li
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Liang Zhen
- MOE Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Harbin Institute of Technology, Harbin, 150080, China
- Fei-Chi Zhou
- School of Microelectronics, Southern University of Science and Technology, Shenzhen, 518055, China.
- Jing-Kai Qin
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
- Cheng-Yan Xu
- Sauvage Laboratory for Smart Materials, School of Materials Science and Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
- MOE Key Laboratory of Micro-Systems and Micro-Structures Manufacturing, Harbin Institute of Technology, Harbin, 150080, China.
7
Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023; 9:293-311. [PMID: 37220331 DOI: 10.1146/annurev-vision-112122-020941]
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
- Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA
- Vision Science Center, University of Washington, Seattle, Washington, USA
8
Arthur T, Vine S, Buckingham G, Brosnan M, Wilson M, Harris D. Testing predictive coding theories of autism spectrum disorder using models of active inference. PLoS Comput Biol 2023; 19:e1011473. [PMID: 37695796 PMCID: PMC10529610 DOI: 10.1371/journal.pcbi.1011473]
Abstract
Several competing neuro-computational theories of autism have emerged from predictive coding models of the brain. To disentangle their subtly different predictions about the nature of atypicalities in autistic perception, we performed computational modelling of two sensorimotor tasks: the predictive use of manual gripping forces during object lifting and anticipatory eye movements during a naturalistic interception task. In contrast to some accounts, we found no evidence of chronic atypicalities in the use of priors or weighting of sensory information during object lifting. Differences in prior beliefs, rates of belief updating, and the precision weighting of prediction errors were, however, observed for anticipatory eye movements. Most notably, we observed autism-related difficulties in flexibly adapting learning rates in response to environmental change (i.e., volatility). These findings suggest that atypical encoding of precision and context-sensitive adjustments provide a better explanation of autistic perception than generic attenuation of priors or persistently high precision prediction errors. Our results did not, however, support previous suggestions that autistic people perceive their environment to be persistently volatile.
Affiliation(s)
- Tom Arthur
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, United Kingdom
- Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, United Kingdom
- Sam Vine
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, United Kingdom
- Gavin Buckingham
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, United Kingdom
- Mark Brosnan
- Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, United Kingdom
- Mark Wilson
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, United Kingdom
- David Harris
- School of Public Health and Sport Sciences, Medical School, University of Exeter, Exeter, United Kingdom
9
Turner W, Blom T, Hogendoorn H. Visual Information Is Predictively Encoded in Occipital Alpha/Low-Beta Oscillations. J Neurosci 2023; 43:5537-5545. [PMID: 37344235 PMCID: PMC10376931 DOI: 10.1523/jneurosci.0135-23.2023]
Abstract
Hierarchical predictive coding networks are a general model of sensory processing in the brain. Under neural delays, these networks have been suggested to naturally generate oscillatory activity in approximately the α frequency range (∼8-12 Hz). This suggests that α oscillations, a prominent feature of EEG recordings, may be a spectral "fingerprint" of predictive sensory processing. Here, we probed this possibility by investigating whether oscillations over the visual cortex predictively encode visual information. Specifically, we examined whether their power carries information about the position of a moving stimulus, in a temporally predictive fashion. In two experiments (N = 32, 18 female; N = 34, 17 female), participants viewed an apparent-motion stimulus moving along a circular path while EEG was recorded. To investigate the encoding of stimulus-position information, we developed a method of deriving probabilistic spatial maps from oscillatory power estimates. With this method, we demonstrate that it is possible to reconstruct the trajectory of a moving stimulus from α/low-β oscillations, tracking its position even across unexpected motion reversals. We also show that future position representations are activated in the absence of direct visual input, demonstrating that temporally predictive mechanisms manifest in α/β band oscillations. In a second experiment, we replicate these findings and show that the encoding of information in this range is not driven by visual entrainment. By demonstrating that occipital α/β oscillations carry stimulus-related information, in a temporally predictive fashion, we provide empirical evidence of these rhythms as a spectral "fingerprint" of hierarchical predictive processing in the human visual system.
SIGNIFICANCE STATEMENT: "Hierarchical predictive coding" is a general model of sensory information processing in the brain. When in silico predictive coding models are constrained by neural transmission delays, their activity naturally oscillates in roughly the α range (∼8-12 Hz). Using time-resolved EEG decoding, we show that neural rhythms in this approximate range (α/low-β) over the human visual cortex predictively encode the position of a moving stimulus. From the amplitude of these oscillations, we are able to reconstruct the stimulus' trajectory, revealing signatures of temporally predictive processing. This provides direct neural evidence linking occipital α/β rhythms to predictive visual processing, supporting the emerging view of such oscillations as a potential spectral "fingerprint" of hierarchical predictive processing in the human visual system.
Affiliation(s)
- William Turner
- Queensland University of Technology, Brisbane, Queensland 4059, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria 3010, Australia
- Tessel Blom
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria 3010, Australia
- Hinze Hogendoorn
- Queensland University of Technology, Brisbane, Queensland 4059, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria 3010, Australia
10
Krüppel S, Khani MH, Karamanlis D, Erol YC, Zapp SJ, Mietsch M, Protti DA, Rozenblit F, Gollisch T. Diversity of Ganglion Cell Responses to Saccade-Like Image Shifts in the Primate Retina. J Neurosci 2023; 43:5319-5339. [PMID: 37339877 PMCID: PMC10359029 DOI: 10.1523/jneurosci.1561-22.2023]
Abstract
Saccades are a fundamental part of natural vision. They interrupt fixations of the visual gaze and rapidly shift the image that falls onto the retina. These stimulus dynamics can cause activation or suppression of different retinal ganglion cells, but how they affect the encoding of visual information in different types of ganglion cells is largely unknown. Here, we recorded spiking responses to saccade-like shifts of luminance gratings from ganglion cells in isolated marmoset retinas and investigated how the activity depended on the combination of presaccadic and postsaccadic images. All identified cell types, On and Off parasol and midget cells, as well as a type of Large Off cells, displayed distinct response patterns, including particular sensitivity to either the presaccadic or the postsaccadic image or combinations thereof. In addition, Off parasol and Large Off cells, but not On cells, showed pronounced sensitivity to whether the image changed across the transition. Stimulus sensitivity of On cells could be explained based on their responses to step changes in light intensity, whereas Off cells, in particular, parasol and the Large Off cells, seem to be affected by additional interactions that are not triggered during simple light-intensity flashes. Together, our data show that ganglion cells in the primate retina are sensitive to different combinations of presaccadic and postsaccadic visual stimuli. This contributes to the functional diversity of the output signals of the retina and to asymmetries between On and Off pathways and provides evidence of signal processing beyond what is triggered by isolated steps in light intensity.
SIGNIFICANCE STATEMENT: Sudden eye movements (saccades) shift our direction of gaze, bringing new images in focus on our retinas. To study how retinal neurons deal with these rapid image transitions, we recorded spiking activity from ganglion cells, the output neurons of the retina, in isolated retinas of marmoset monkeys while shifting a projected image in a saccade-like fashion across the retina. We found that the cells do not just respond to the newly fixated image, but that different types of ganglion cells display different sensitivities to the presaccadic and postsaccadic stimulus patterns. Certain Off cells, for example, are sensitive to changes in the image across transitions, which contributes to differences between On and Off information channels and extends the range of encoded stimulus features.
Affiliation(s)
- Steffen Krüppel
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075 Göttingen, Germany
- Mohammad H Khani
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- International Max Planck Research School for Neurosciences, 37077 Göttingen, Germany
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- International Max Planck Research School for Neurosciences, 37077 Göttingen, Germany
- Yunus C Erol
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- International Max Planck Research School for Neurosciences, 37077 Göttingen, Germany
- Sören J Zapp
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- Matthias Mietsch
- Laboratory Animal Science Unit, German Primate Center, 37077 Göttingen, Germany
- German Center for Cardiovascular Research, 37075 Göttingen, Germany
- Dario A Protti
- School of Medical Sciences (Neuroscience), The University of Sydney, Sydney 2006, New South Wales, Australia
- Fernando Rozenblit
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, 37075 Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany
- Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075 Göttingen, Germany
11
Sihn D, Kwon OS, Kim SP. Robust and efficient representations of dynamic stimuli in hierarchical neural networks via temporal smoothing. Front Comput Neurosci 2023; 17:1164595. [PMID: 37398935 PMCID: PMC10307978 DOI: 10.3389/fncom.2023.1164595]
Abstract
Introduction: Efficient coding that minimizes the informational redundancy of neural representations is a widely accepted neural coding principle. Despite this benefit, maximizing efficiency in neural coding can make neural representations vulnerable to random noise. One way to achieve robustness against random noise is to smooth neural responses. However, it is not clear whether the smoothness of neural responses can maintain robust neural representations when dynamic stimuli are processed through a hierarchical brain structure, in which not only random noise but also systematic error due to temporal lag can be induced. Methods: In the present study, we showed that smoothness via spatio-temporally efficient coding can achieve both efficiency and robustness by effectively dealing with noise and neural delay in the visual hierarchy when processing dynamic visual stimuli. Results: The simulation results demonstrated that a hierarchical neural network whose bidirectional synaptic connections were learned through spatio-temporally efficient coding with natural scenes could elicit neural responses to visual moving bars similar to those to static bars with the identical position and orientation, indicating robust neural responses against erroneous neural information. This implies that spatio-temporally efficient coding preserves the structure of visual environments locally in the neural responses of hierarchical structures. Discussion: The present results suggest the importance of a balance between efficiency and robustness in neural coding for the visual processing of dynamic stimuli across hierarchical brain structures.
12
Johnson PA, Blom T, van Gaal S, Feuerriegel D, Bode S, Hogendoorn H. Position representations of moving objects align with real-time position in the early visual response. eLife 2023; 12:e82424. [PMID: 36656268 PMCID: PMC9851612 DOI: 10.7554/elife.82424]
Abstract
When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing.
13
Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023; 21:910-922. [PMID: 36698970 PMCID: PMC9851868 DOI: 10.1016/j.csbj.2023.01.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 01/09/2023] [Accepted: 01/09/2023] [Indexed: 01/13/2023] Open
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
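A concrete instance of the information-theoretic tools discussed above is the plug-in estimate of mutual information between a discrete stimulus and a neural response. The "noisy neuron" below is a hypothetical binary channel, used only to show the estimator:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) / (p(a) p(b)) simplifies to c * n / (count_a * count_b)
        mi += (c / n) * np.log2(c * n / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(2)
stim = rng.integers(0, 2, 10_000).tolist()
# A hypothetical noisy neuron that reports the stimulus correctly 90% of the time.
flip = rng.random(10_000) < 0.1
resp = [1 - s if f else s for s, f in zip(stim, flip)]
mi_bits = mutual_information(stim, resp)   # theory: 1 - H(0.1) ≈ 0.53 bits
```

Plug-in estimators like this are positively biased for small samples, which is one reason the Review's more sophisticated correction and decoding methods matter in practice.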
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany; Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy. Corresponding author.
14
Gaynes JA, Budoff SA, Grybko MJ, Hunt JB, Poleg-Polsky A. Classical center-surround receptive fields facilitate novel object detection in retinal bipolar cells. Nat Commun 2022; 13:5575. [PMID: 36163249 PMCID: PMC9512824 DOI: 10.1038/s41467-022-32761-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 08/16/2022] [Indexed: 11/11/2022] Open
Abstract
Antagonistic interactions between center and surround receptive field (RF) components lie at the heart of the computations performed in the visual system. Circularly symmetric center-surround RFs are thought to enhance responses to spatial contrasts (i.e., edges), but how visual edges affect motion processing is unclear. Here, we addressed this question in retinal bipolar cells, the first visual neuron with classic center-surround interactions. We found that bipolar glutamate release emphasizes objects that emerge in the RF; their responses to continuous motion are smaller, slower, and cannot be predicted by signals elicited by stationary stimuli. In our hands, the alteration in signal dynamics induced by novel objects was more pronounced than edge enhancement and could be explained by priming of RF surround during continuous motion. These findings echo the salience of human visual perception and demonstrate an unappreciated capacity of the center-surround architecture to facilitate novel object detection and dynamic signal representation. Center-surround receptive fields are typically considered to mediate edge detection. Here, by studying retinal bipolar cells responding to flashed and moving stimuli, the authors reveal an additional function: enhanced representation of newly appearing visual items.
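The classical center-surround receptive field mentioned above is commonly modeled as a difference of Gaussians (DoG). A small sketch with hypothetical parameters shows such a filter responding most strongly near a luminance edge, the "edge enhancement" the abstract contrasts with novel-object signaling:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """Difference of Gaussians: excitatory center minus inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

def convolve2d_valid(img, k):
    """Plain 'valid'-mode 2-D filtering (kernel is symmetric, so no flip needed)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A vertical luminance step edge at column 20 of a 40x40 image.
image = np.zeros((40, 40))
image[:, 20:] = 1.0

response = convolve2d_valid(image, dog_kernel())
edge_profile = np.abs(response).mean(axis=0)
peak_col = int(edge_profile.argmax())   # strongest response near the edge
```

The static DoG model predicts only this spatial-contrast enhancement; the authors' point is that the measured dynamics to continuous motion depart from what such a stationary-stimulus model can predict.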
Affiliation(s)
- John A Gaynes
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Samuel A Budoff
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Michael J Grybko
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Joshua B Hunt
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
- Alon Poleg-Polsky
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA.
15
In-sensor image memorization and encoding via optical neurons for bio-stimulus domain reduction toward visual cognitive processing. Nat Commun 2022; 13:5223. [PMID: 36064944 PMCID: PMC9445171 DOI: 10.1038/s41467-022-32790-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 08/15/2022] [Indexed: 12/03/2022] Open
Abstract
As machine vision technology generates large amounts of data from sensors, it requires efficient computational systems for visual cognitive processing. Recently, in-sensor computing systems have emerged as a potential solution for reducing unnecessary data transfer and realizing fast and energy-efficient visual cognitive processing. However, they still lack the capability to process stored images directly within the sensor. Here, we demonstrate a heterogeneously integrated 1-photodiode and 1 memristor (1P-1R) crossbar for in-sensor visual cognitive processing, emulating a mammalian image encoding process to extract features from the input images. Unlike other neuromorphic vision processes, the trained weight values are applied as an input voltage to the image-saved crossbar array instead of storing the weight value in the memristors, realizing the in-sensor computing paradigm. We believe the heterogeneously integrated in-sensor computing platform provides an advanced architecture for real-time and data-intensive machine-vision applications via bio-stimulus domain reduction. Designing in-sensor computing systems remains a challenge. Here, the authors demonstrate artificial optical neurons based on the in-sensor computing architecture that fuses sensory and computing nodes into a single platform capable of reducing data transfer time and energy for encoding and classification.
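The weights-as-voltage readout described above amounts to an analog matrix-vector product: with the image stored as device conductances, Ohm's law and current summation along each output line perform the multiply-accumulate. The sketch below is schematic (array sizes and values hypothetical, not the fabricated 1P-1R device):

```python
import numpy as np

rng = np.random.default_rng(4)

# The image is stored *in the sensor* as memristor conductances: one device per
# pixel, conductance proportional to the stored pixel value (arbitrary units).
image = rng.random(64)        # flattened 8x8 image
conductances = image          # G_j proportional to stored pixel j

# Trained weights are applied as input voltages rather than being stored in the
# devices; I = V * G per crossing, summed along each output line.
weights = rng.normal(0.0, 1.0, size=(10, 64))   # 10 hypothetical feature rows
currents = weights @ conductances               # I_k = sum_j V_kj * G_j
```

Reading out ten feature currents instead of 64 pixel values is the "bio-stimulus domain reduction" in miniature: the data leaving the sensor is already an encoding, not the raw image.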
16
DePiero VJ, Borghuis BG. Phase advancing is a common property of multiple neuron classes in the mouse retina. eNeuro 2022; 9:ENEURO.0270-22.2022. [PMID: 35995559 PMCID: PMC9450563 DOI: 10.1523/eneuro.0270-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 08/11/2022] [Accepted: 08/18/2022] [Indexed: 11/21/2022] Open
Abstract
Behavioral interactions with moving objects are challenged by response latencies within the sensory and motor nervous systems. In vision, the combined latency from phototransduction and synaptic transmission from the retina to central visual areas amounts to 50-100 ms, depending on stimulus conditions. Time required for generating appropriate motor output adds to this latency and further compounds the behavioral delay. Neuronal adaptations that help counter sensory latency within the retina have been demonstrated in some species, but how general these specializations are, and where in the circuitry they originate, remains unclear. To address this, we studied the timing of object motion-evoked responses at multiple signaling stages within the mouse retina using two-photon fluorescence calcium and glutamate imaging, targeted whole-cell electrophysiology, and computational modeling. We found that both ON- and OFF-type ganglion cells, as well as the bipolar cells that innervate them, temporally advance the position encoding of a moving object and so help counter the inherent signaling delay in the retina. Model simulations show that this predictive capability is a direct consequence of the spatial extent of the cells' linear visual receptive field, with no apparent specialized circuits that help predict beyond it.
Significance Statement
Signal transduction and synaptic transmission within sensory signaling pathways cost time. Not a lot of time, just tens to a few hundred milliseconds depending on the sensory system, but enough to challenge fast behavioral interactions under dynamic stimulus conditions, like catching a moving fly. To counter neuronal delays, the nervous systems of many species use anticipatory mechanisms. One such mechanism in the mammalian visual system helps predict the future position of a moving target through a process called phase advancing. Here we ask how common phase advancing is across functionally diverse neuron populations in the mouse retina, and we demonstrate that it is widespread and generated at multiple signaling stages.
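The paper's claim that phase advancing falls out of the spatial extent of a linear receptive field can be sketched in a few lines: a bar moving toward a Gaussian RF drives the cell through the RF's leading flank well before reaching its center. Parameters are hypothetical, chosen only for illustration:

```python
import numpy as np

# A purely linear, spatially extended receptive field (Gaussian, centered at 0).
sigma = 1.0
rf = lambda x: np.exp(-x ** 2 / (2 * sigma ** 2))

# A thin bar moves at constant speed; with speed = 1, its position equals time,
# so the bar is over the RF center exactly at t = 0.
times = np.linspace(-5, 5, 1001)
response = rf(times)   # linear RF drive from the bar at each moment

# A downstream threshold readout fires well before the bar reaches the RF
# center: the leading flank of the RF supplies the phase advance, with no
# specialized predictive circuitry required.
threshold = 0.2
onset_time = float(times[np.argmax(response > threshold)])
```

With these values the response crosses threshold near t ≈ -1.8, i.e. the position signal leads the bar's arrival at the RF center by roughly two spatial standard deviations' worth of travel time.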
Affiliation(s)
- Victor J DePiero
- Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine, Louisville, KY 40202, USA
- Department of Biology, University of Virginia, Charlottesville, VA 22904, USA
- Bart G Borghuis
- Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine, Louisville, KY 40202, USA
17
Rezeanu D, Neitz M, Neitz J. How We See Black and White: The Role of Midget Ganglion Cells. Front Neuroanat 2022; 16:944762. [PMID: 35864822 PMCID: PMC9294633 DOI: 10.3389/fnana.2022.944762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 06/17/2022] [Indexed: 11/13/2022] Open
Abstract
According to classical opponent color theory, hue sensations are mediated by spectrally opponent neurons that are excited by some wavelengths of light and inhibited by others, while black-and-white sensations are mediated by spectrally non-opponent neurons that respond with the same sign to all wavelengths. However, careful consideration of the morphology and physiology of spectrally opponent L vs. M midget retinal ganglion cells (RGCs) in the primate retina indicates that they are ideally suited to mediate black-and-white sensations and poorly suited to mediate color. Here we present a computational model that demonstrates how the cortex could use unsupervised learning to efficiently separate the signals from L vs. M midget RGCs into distinct signals for black and white based only on the correlation of activity over time. The model also reveals why it is unlikely that these same ganglion cells could simultaneously mediate our perception of red and green, and shows how, in theory, a separate small population of midget RGCs with input from S, M, and L cones would be ideally suited to mediating hue perception.
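The core of the "separation by correlation over time" idea can be sketched with a two-unit toy model: when a shared achromatic signal dominates L- and M-center responses, the leading eigenvector of their temporal covariance is the L + M (black/white) axis. The signal strengths below are hypothetical, not fitted to retinal data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical L- and M-center midget-cell responses: a shared achromatic
# (luminance) signal dominates; the chromatic (L - M) difference is weak.
T = 5000
luminance = rng.normal(0.0, 1.0, T)
chromatic = rng.normal(0.0, 0.2, T)
L_resp = luminance + chromatic
M_resp = luminance - chromatic

# Unsupervised learning from correlation over time, in its simplest form:
# the dominant eigenvector of the response covariance is the L + M axis
# (black/white); the weak orthogonal axis carries the L - M residue.
cov = np.cov(np.stack([L_resp, M_resp]))
eigvals, eigvecs = np.linalg.eigh(cov)
achromatic_axis = eigvecs[:, np.argmax(eigvals)]   # ~ (1, 1) / sqrt(2)
```

A cortical circuit implementing Hebbian learning would converge on the same dominant axis, which is why correlation alone suffices to pull the black/white signal out of these cells.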
Affiliation(s)
- Jay Neitz
- Department of Ophthalmology, University of Washington, Seattle, WA, United States
18
Manookin MB. Neuroscience: Reliable and refined motion computations in the retina. Curr Biol 2022; 32:R474-R476. [PMID: 35609547 DOI: 10.1016/j.cub.2022.04.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
We can distinguish between the direction and speed of a moving object effortlessly, but this is actually a very challenging computational task. A new study demonstrates that this process begins at the first stages of visual processing in the retina.
Affiliation(s)
- Michael B Manookin
- Department of Ophthalmology, University of Washington, Seattle, WA 98109, USA; Vision Science Center, University of Washington, Seattle, WA 98109, USA; Karalis Johnson Eye Center, University of Washington, Seattle, WA 98109, USA.
19
Zapp SJ, Nitsche S, Gollisch T. Retinal receptive-field substructure: scaffolding for coding and computation. Trends Neurosci 2022; 45:430-445. [DOI: 10.1016/j.tins.2022.03.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/29/2022]
20
Unraveling the Neural Mechanisms Which Encode Rapid Streams of Visual Input. J Neurosci 2022; 42:1170-1172. [PMID: 35173038 DOI: 10.1523/jneurosci.2013-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 12/02/2021] [Accepted: 12/10/2021] [Indexed: 11/21/2022] Open
21
Sennesh E, Theriault J, Brooks D, van de Meent JW, Barrett LF, Quigley KS. Interoception as modeling, allostasis as control. Biol Psychol 2021; 167:108242. [PMID: 34942287 DOI: 10.1016/j.biopsycho.2021.108242] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 12/13/2021] [Accepted: 12/14/2021] [Indexed: 01/09/2023]
Abstract
The brain regulates the body by anticipating its needs and attempting to meet them before they arise - a process called allostasis. Allostasis requires a model of the changing sensory conditions within the body, a process called interoception. In this paper, we examine how interoception may provide performance feedback for allostasis. We suggest studying allostasis in terms of control theory, reviewing control theory's applications to related issues in physiology, motor control, and decision making. We synthesize these by relating them to the important properties of allostatic regulation as a control problem. We then sketch a novel formalism for how the brain might perform allostatic control of the viscera by analogy to skeletomotor control, including a mathematical view on how interoception acts as performance feedback for allostasis. Finally, we suggest ways to test implications of our hypotheses.
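The control-theoretic framing above can be made concrete with a toy regulator: pure (interoceptive) feedback reacts to deviations after they occur, while adding a feedforward term driven by a predictable disturbance, the allostatic move, cancels them before they arise. Everything here is a hypothetical illustration of that contrast, not the authors' formalism:

```python
import numpy as np

# A toy regulated variable perturbed by a predictable rhythmic disturbance.
T = 200
disturbance = np.sin(np.linspace(0, 4 * np.pi, T))
setpoint = 0.0

def regulate(anticipate):
    """Proportional feedback control, optionally adding a feedforward term."""
    state, errors = 0.0, []
    for t in range(T):
        predicted = disturbance[t] if anticipate else 0.0
        action = -0.5 * (state - setpoint) - predicted  # feedback + feedforward
        state = state + disturbance[t] + action
        errors.append(abs(state - setpoint))
    return float(np.mean(errors))

reactive_err = regulate(anticipate=False)    # interoceptive feedback only
allostatic_err = regulate(anticipate=True)   # feedback + anticipatory model
```

In this sketch the anticipatory controller holds the state at the setpoint exactly, while the purely reactive one chases the disturbance with a persistent error, which is the performance gap that makes an internal model of the body worth the metabolic cost.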
Affiliation(s)
- Eli Sennesh
- Northeastern University, Boston, MA, United States.
- Dana Brooks
- Northeastern University, Boston, MA, United States