1. Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024;121:e2311893121. PMID: 38913890. DOI: 10.1073/pnas.2311893121.
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity and its asymmetry, the duration and adaptive nature of feedforward and feedback neuronal filters, the imprecision in spike generation under constant stimulation, and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
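To make the DD-DC idea concrete, here is a minimal sketch in the spirit of direct data-driven control: the controller never estimates the plant parameters explicitly, but computes its action directly from recorded trajectories. The scalar plant, the deadbeat (drive-to-zero) objective, and all constants are our illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy direct data-driven control: steer an unknown scalar plant
# x[t+1] = a*x[t] + b*u[t] to zero using recorded data only, without
# ever forming explicit estimates of a or b.
rng = np.random.default_rng(1)
a, b = 0.9, 0.5                     # true plant, unknown to the controller

# Phase 1: record an excitation trajectory.
T = 20
u_data = rng.standard_normal(T)
x_data = np.zeros(T + 1)
for t in range(T):
    x_data[t + 1] = a * x_data[t] + b * u_data[t]
X, U, Xp = x_data[:T], u_data, x_data[1:]   # states, inputs, next states

# Phase 2: control directly from data. Find a combination g of recorded
# columns that matches the current state and makes the predicted next
# state zero; the control is that same combination of recorded inputs.
def deadbeat_u(x):
    M = np.vstack([X, Xp])          # constraints: X @ g = x, Xp @ g = 0
    g, *_ = np.linalg.lstsq(M, np.array([x, 0.0]), rcond=None)
    return float(U @ g)

x = 3.0
u = deadbeat_u(x)
x_next = a * x + b * u              # plant responds to the data-driven input
print(abs(x_next) < 1e-6)           # steered to (near) zero in one step
```

Because the predicted next state `Xp @ g` equals `a*(X @ g) + b*(U @ g)` for any `g`, satisfying the two data constraints is enough to cancel the dynamics; this is the sense in which system identification happens only implicitly.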
Affiliation(s)
- Jason J Moore: Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016; Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin: Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy: Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii: Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016; Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
2. Feuerriegel D. Adaptation in the visual system: Networked fatigue or suppressed prediction error signalling? Cortex 2024;177:302-320. PMID: 38905873. DOI: 10.1016/j.cortex.2024.06.003.
Abstract
Our brains are constantly adapting to changes in our visual environments. Neural adaptation exerts a persistent influence on the activity of sensory neurons and on our perceptual experience; however, there is a lack of consensus regarding how adaptation is implemented in the visual system. One account describes fatigue-based mechanisms embedded within local networks of stimulus-selective neurons (networked fatigue models). Another depicts adaptation as a product of stimulus expectations (predictive coding models). In this review, I evaluate neuroimaging and psychophysical evidence that poses fundamental problems for predictive coding models of neural adaptation. Specifically, I discuss observations of distinct repetition and expectation effects, as well as incorrect predictions of repulsive adaptation aftereffects made by predictive coding accounts. Based on this evidence, I argue that networked fatigue models provide a more parsimonious account of adaptation effects in the visual system. Although stimulus expectations can be formed based on recent stimulation history, any consequences of these expectations are likely to co-occur (or interact) with effects of fatigue-based adaptation. I conclude by proposing novel, testable hypotheses relating to interactions between fatigue-based adaptation and other predictive processes, focusing on stimulus feature extrapolation phenomena.
Affiliation(s)
- Daniel Feuerriegel: Melbourne School of Psychological Sciences, The University of Melbourne, Australia
3. Aitken K, Campagnola L, Garrett ME, Olsen SR, Mihalas S. Simple synaptic modulations implement diverse novelty computations. Cell Rep 2024;43:114188. PMID: 38713584. DOI: 10.1016/j.celrep.2024.114188.
Abstract
Detecting novelty is ethologically useful for an organism's survival. Recent experiments characterize how different types of novelty over timescales from seconds to weeks are reflected in the activity of excitatory and inhibitory neuron types. Here, we introduce a learning mechanism, familiarity-modulated synapses (FMSs), consisting of multiplicative modulations dependent on presynaptic or pre/postsynaptic neuron activity. With FMSs, network responses that encode novelty emerge under unsupervised continual learning and minimal connectivity constraints. Implementing FMSs within an experimentally constrained model of a visual cortical circuit, we demonstrate the generalizability of FMSs by simultaneously fitting absolute, contextual, and omission novelty effects. Our model also reproduces functional diversity within cell subpopulations, leading to experimentally testable predictions about connectivity and synaptic dynamics that can produce both population-level novelty responses and heterogeneous individual neuron signals. Altogether, our findings demonstrate how simple plasticity mechanisms within a cortical circuit structure can produce qualitatively distinct and complex novelty responses.
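As a rough illustration of how a multiplicative, presynaptically gated modulation can produce novelty responses, here is a minimal sketch; the update rule, the constants, and the network (a single readout with five inputs) are our assumptions, not the paper's fitted circuit model:

```python
import numpy as np

# Familiarity-modulated synapse (FMS) sketch: the effective weight is w * m,
# where the modulation m multiplicatively depresses for recently active
# (familiar) presynaptic inputs and slowly recovers toward baseline.
# All constants and the update rule are illustrative assumptions.
n_pre = 5
w = np.ones(n_pre)            # fixed baseline weights
m = np.ones(n_pre)            # multiplicative familiarity modulation
eta, recovery = 0.2, 0.02     # depression rate, recovery rate

def step(x_pre):
    """One timestep: depress modulation at active synapses, let all synapses
    recover, then return the postsynaptic drive through modulated weights."""
    global m
    m = m - eta * m * x_pre       # familiar (active) inputs depress
    m = m + recovery * (1.0 - m)  # slow recovery toward baseline
    return float((w * m) @ x_pre)

familiar = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
novel = np.array([0.0, 0.0, 1.0, 1.0, 0.0])

for _ in range(50):               # repeated exposure: response habituates
    r_familiar = step(familiar)
r_novel = step(novel)             # first presentation of a novel stimulus
print(r_novel > r_familiar)      # the novel input drives a larger response
```

Unsupervised continual learning falls out of the locality of the rule: nothing here depends on labels or error signals, only on which presynaptic inputs have recently been active.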
Affiliation(s)
- Kyle Aitken: Center for Data-Driven Discovery for Biology, Allen Institute, Seattle, WA 98109, USA
- Shawn R Olsen: Allen Institute for Neural Dynamics, Seattle, WA 98109, USA
- Stefan Mihalas: Center for Data-Driven Discovery for Biology, Allen Institute, Seattle, WA 98109, USA; Applied Mathematics, University of Washington, Seattle, WA 98195, USA
4. Ma AC, Cameron AD, Wiener M. Memorability shapes perceived time (and vice versa). Nat Hum Behav 2024. PMID: 38649460. DOI: 10.1038/s41562-024-01863-2.
Abstract
Visual stimuli are known to vary in their perceived duration. Some visual stimuli are also known to linger for longer in memory. Yet, whether these two features of visual processing are linked is unknown. Despite early assumptions that time is an extracted or higher-order feature of perception, more recent work over the past two decades has demonstrated that timing may be instantiated within sensory modality circuits. A primary location for many of these studies is the visual system, where duration-sensitive responses have been demonstrated. Furthermore, visual stimulus features have been observed to shift perceived duration. These findings suggest that visual circuits mediate or construct perceived time. Here we present evidence across a series of experiments that perceived time is affected by the image properties of scene size, clutter and memorability. More specifically, we observe that scene size and memorability dilate time, whereas clutter contracts it. Furthermore, the durations of more memorable images are also perceived more precisely. Conversely, the longer the perceived duration of an image, the more memorable it is. To explain these findings, we applied a recurrent convolutional neural network model of the ventral visual system, in which images are progressively processed over time. We find that more memorable images are processed faster, and that this increase in processing speed predicts both the lengthening and the increased precision of perceived durations. These findings provide evidence for a link between image features, time perception and memory that can be further explored with models of visual processing.
Affiliation(s)
- Alex C Ma: Department of Psychology, George Mason University, Fairfax, VA, USA
- Ayana D Cameron: Department of Psychology, George Mason University, Fairfax, VA, USA
- Martin Wiener: Department of Psychology, George Mason University, Fairfax, VA, USA
5. Wessels M, Oberfeld D. A binary acceleration signal reduces overestimation in pedestrians' visual time-to-collision estimation for accelerating vehicles. Heliyon 2024;10:e27483. PMID: 38496889. PMCID: PMC10944229. DOI: 10.1016/j.heliyon.2024.e27483.
Abstract
When a pedestrian intends to cross the street, it is essential for safe mobility to correctly estimate the arrival time (time-to-collision, TTC) of an approaching vehicle. However, visual perception of acceleration is rather imprecise. Previous studies consistently showed that humans (mostly) disregard acceleration and judge the TTC of an object as if it were traveling at constant speed (first-order estimation), which leads to overestimated TTCs for positively accelerating objects. In a traffic context, such TTC overestimation could motivate pedestrians to cross in front of an approaching vehicle even though the time remaining is not sufficiently long. Can a simple acceleration signal help improve visual TTC estimation for accelerating objects? The present study investigated whether a signal that only indicates whether a vehicle is accelerating or not can remove the first-order pattern of overestimated TTCs. In a virtual reality simulation, 26 participants estimated the TTC of vehicles that approached with constant velocity or accelerated, from the perspective of a pedestrian at the curb. In half of the experimental blocks, a light band on the windshield illuminated whenever the vehicle accelerated but remained deactivated when the vehicle travelled at a constant speed. In the other blocks, the light band never illuminated, regardless of whether or not the vehicle accelerated. Participants were informed about the light band's function in each block. Without the acceleration signal, the estimated TTCs for the accelerating vehicles were consistent with an erroneous first-order approximation. In blocks with the acceleration signal, participants substantially changed their estimation strategy, so that TTC overestimations for accelerating vehicles were reduced. Our data suggest that a binary acceleration signal helps pedestrians effectively reduce the TTC overestimation for accelerating vehicles and could therefore increase pedestrian safety.
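The first-order (constant-velocity) estimation pattern described above can be made concrete with a short worked example; the scene parameters below are hypothetical, not taken from the study:

```python
# True TTC for a vehicle approaching with constant acceleration vs. the
# first-order estimate that extrapolates the current speed only.

def true_ttc(distance, v, a):
    """Time to cover `distance` from speed v with constant acceleration a:
    the positive root of 0.5*a*t**2 + v*t - distance = 0."""
    if a == 0:
        return distance / v
    return (-v + (v * v + 2 * a * distance) ** 0.5) / a

def first_order_ttc(distance, v):
    """Estimate that disregards acceleration (distance / current speed)."""
    return distance / v

d, v, a = 40.0, 8.0, 2.0               # 40 m away, 8 m/s, gaining 2 m/s^2
print(round(true_ttc(d, v, a), 2))     # 3.48 s actually remain
print(first_order_ttc(d, v))           # 5.0 s estimated: overestimation
```

The gap between the two numbers is exactly the safety margin the pedestrian does not actually have, which is why a signal that restores sensitivity to acceleration matters.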
Affiliation(s)
- Marlene Wessels: Institute of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Wallstrasse 3, 55122 Mainz, Germany
- Daniel Oberfeld: Institute of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Wallstrasse 3, 55122 Mainz, Germany
6. Hansel C. Contiguity in perception: origins in cellular associative computations. Trends Neurosci 2024;47:170-180. PMID: 38310022. PMCID: PMC10939850. DOI: 10.1016/j.tins.2024.01.001.
Abstract
Our brains are good at detecting and learning associative structures; according to some linguistic theories, this capacity even constitutes a prerequisite for the development of syntax and compositionality in language and verbalized thought. I will argue that the search for associative motifs in input patterns is an evolutionary old brain function that enables contiguity in sensory perception and orientation in time and space. It has its origins in an elementary material property of cells that is particularly evident at chemical synapses: input-assigned calcium influx that activates calcium sensor proteins involved in memory storage. This machinery for the detection and learning of associative motifs generates knowledge about input relationships and integrates this knowledge into existing networks through updates in connectivity patterns.
Affiliation(s)
- Christian Hansel: Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
7. Wögerbauer EM, Hecht H, Wessels M. Camera-Monitor Systems as an Opportunity to Compensate for Perceptual Errors in Time-to-Contact Estimations. Vision (Basel) 2023;7:65. PMID: 37873893. PMCID: PMC10594519. DOI: 10.3390/vision7040065.
Abstract
For the safety of road traffic, it is crucial to accurately estimate the time it will take for a moving object to reach a specific location (time-to-contact estimation, TTC). Observers make more or less accurate TTC estimates of objects of average size that are moving at constant speeds. However, they make perceptual errors when judging objects which accelerate or which are unusually large or small. In the former case, for instance, when asked to extrapolate the motion of an accelerating object, observers tend to assume that the object continues to move with the speed it had before it went out of sight. In the latter case, the TTC of large objects is underestimated, whereas the TTC of small objects is overestimated, as if physical size is confounded with retinal size (the size-arrival effect). In normal viewing, these perceptual errors cannot be helped, but camera-monitor systems offer the unique opportunity to exploit the size-arrival effect to cancel out errors induced by the failure to respond to acceleration. To explore whether such error cancellation can work in principle, we conducted two experiments using a prediction-motion paradigm in which the size of the approaching vehicle was manipulated. The results demonstrate that altering the vehicle's size had the expected influence on the TTC estimation. This finding has practical implications for the implementation of camera-monitor systems.
8. Manookin MB, Rieke F. Two Sides of the Same Coin: Efficient and Predictive Neural Coding. Annu Rev Vis Sci 2023;9:293-311. PMID: 37220331. DOI: 10.1146/annurev-vision-112122-020941.
Abstract
Some visual properties are consistent across a wide range of environments, while other properties are more labile. The efficient coding hypothesis states that many of these regularities in the environment can be discarded from neural representations, thus allocating more of the brain's dynamic range to properties that are likely to vary. This paradigm is less clear about how the visual system prioritizes different pieces of information that vary across visual environments. One solution is to prioritize information that can be used to predict future events, particularly those that guide behavior. The relationship between the efficient coding and future prediction paradigms is an area of active investigation. In this review, we argue that these paradigms are complementary and often act on distinct components of the visual input. We also discuss how normative approaches to efficient coding and future prediction can be integrated.
Affiliation(s)
- Michael B Manookin: Department of Ophthalmology, University of Washington, Seattle, Washington, USA; Vision Science Center, University of Washington, Seattle, Washington, USA; Karalis Johnson Retina Center, University of Washington, Seattle, Washington, USA
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, Washington, USA; Vision Science Center, University of Washington, Seattle, Washington, USA
9. Humans Can Track But Fail to Predict Accelerating Objects. eNeuro 2022;9:ENEURO.0185-22.2022. PMID: 36635938. PMCID: PMC9469915. DOI: 10.1523/eneuro.0185-22.2022.
Abstract
Objects in our visual environment often move unpredictably and can suddenly speed up or slow down. The ability to account for acceleration when interacting with moving objects can be critical for survival. Here, we investigate how human observers track an accelerating target with their eyes and predict its time of reappearance after a temporal occlusion by making an interceptive hand movement. Before occlusion, observers smoothly tracked the accelerating target with their eyes. At the time of occlusion, observers made a predictive saccade to the location where they subsequently intercepted the target with a quick pointing movement. We tested how observers integrated target motion information by comparing three alternative models that describe time-to-contact (TTC) based on the (1) final target velocity sample before occlusion, (2) average target velocity before occlusion, or (3) final target velocity and the rate of target acceleration. We show that observers were able to accurately track the accelerating target with visually-guided smooth pursuit eye movements. However, the timing of the predictive saccade and manual interception revealed an inability to act on target acceleration when predicting TTC. Instead, interception timing was best described by the final velocity model that relies on extrapolating the last available target velocity sample before occlusion. Moreover, predictive saccades and manual interception showed similar insensitivity to target acceleration and were correlated on a trial-by-trial basis. These findings provide compelling evidence for a failure to integrate target acceleration into the predictive models of target motion that drive both interceptive eye and hand movements.
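The three candidate TTC models the study compares can be sketched as follows; the numeric scenario (a target that accelerates from 4 to 8 m/s and is then occluded 12 m from the interception point) is our illustration, not the study's stimulus:

```python
# Three models of time-to-contact (TTC) prediction after occlusion.

def ttc_final_velocity(d, v_final):
    """Model 1: extrapolate the last velocity sample before occlusion."""
    return d / v_final

def ttc_average_velocity(d, v_avg):
    """Model 2: extrapolate the average pre-occlusion velocity."""
    return d / v_avg

def ttc_final_velocity_and_acceleration(d, v_final, a):
    """Model 3: use the last velocity AND the acceleration: the positive
    root of 0.5*a*t**2 + v_final*t - d = 0."""
    return (-v_final + (v_final**2 + 2 * a * d) ** 0.5) / a

d, v0, v_final, a = 12.0, 4.0, 8.0, 2.0
v_avg = (v0 + v_final) / 2
print(ttc_final_velocity(d, v_final))                          # 1.5 s
print(ttc_average_velocity(d, v_avg))                          # 2.0 s
print(round(ttc_final_velocity_and_acceleration(d, v_final, a), 2))  # 1.29 s
```

If the target keeps accelerating behind the occluder, model 3 gives the true TTC (about 1.29 s here), so the final-velocity strategy the data favor arrives systematically late for accelerating targets.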
10. Wessels M, Zähme C, Oberfeld D. Auditory Information Improves Time-to-collision Estimation for Accelerating Vehicles. Curr Psychol 2022. DOI: 10.1007/s12144-022-03375-6.
Abstract
To cross a road safely, pedestrians estimate the time remaining until an approaching vehicle arrives at their location (time-to-collision, TTC). For visually presented accelerated objects, however, TTC estimates are known to show a first-order pattern, indicating that acceleration is not adequately considered. We investigated whether added vehicle sound can reduce these estimation errors. Twenty-five participants estimated the TTC of vehicles approaching with constant velocity or accelerating, from a pedestrian's perspective at the curb in a traffic simulation. For visually-only presented accelerating vehicles, the TTC estimates showed the expected first-order pattern and thus large estimation errors. With added vehicle sound, the first-order pattern was largely removed, and TTC estimates were significantly more accurate compared to the visual-only presentation. For constant velocities, TTC estimates in both presentation conditions were predominantly accurate. Taken together, the sound of an accelerating vehicle can compensate for erroneous visual TTC estimates, presumably by promoting the consideration of acceleration.
11. Cappotto D, Kang H, Li K, Melloni L, Schnupp J, Auksztulewicz R. Simultaneous mnemonic and predictive representations in the auditory cortex. Curr Biol 2022;32:2548-2555.e5. PMID: 35487221. DOI: 10.1016/j.cub.2022.04.022.
Abstract
Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations [1-4]. However, memories of previous stimuli can also be used to form sensory predictions about upcoming stimuli [5,6]. Predictive mechanisms allow the brain to create a probable model of the outside world, which can be updated when errors are detected between the model predictions and external inputs [7-10]. Direct recordings in the auditory cortex of awake mice established neural mechanisms for how encoding might handle working memory and predictive processes without "overwriting" recent sensory events in instances where predictive mechanisms are triggered by oddballs within a sequence [11]. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously during passive, implicit sequence processing, even in anesthetized models. Here, we recorded neural activity elicited by repeated stimulus sequences using electrocorticography (ECoG) in the auditory cortex of anesthetized rats, where events within the sequence (referred to henceforth as "vowels," for simplicity) were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to broadband impulses, at overlapping latencies but based on independent and uncorrelated data features. We also demonstrate that predictive representations are dynamically updated over the course of stimulation.
Affiliation(s)
- Drew Cappotto: Department of Neuroscience, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- HiJee Kang: Department of Neuroscience, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Kongyan Li: Department of Neuroscience, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Lucia Melloni: Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany
- Jan Schnupp: Department of Neuroscience, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong; Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany; European Neuroscience Institute Göttingen: A Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstraße 5, 37077 Göttingen, Germany