1. Fu Q. Motion perception based on ON/OFF channels: A survey. Neural Netw 2023; 165:1-18. [PMID: 37263088] [DOI: 10.1016/j.neunet.2023.05.031]
Abstract
Motion perception is an essential ability for animals and artificially intelligent systems to interact effectively and safely with surrounding objects and environments. Biological visual systems, which have evolved over hundreds of millions of years, are remarkably efficient and robust at motion perception, whereas artificial vision systems are far from such capability. This paper argues that the gap can be significantly reduced by formulating ON/OFF channels in motion perception models, which encode luminance increment (ON) and decrement (OFF) responses within the receptive field separately. Such a signal-bifurcating structure has been found in the neural systems of many animal species, indicating that early motion signals are split and processed in segregated pathways. However, the corresponding biological substrates and the necessity for artificial vision systems have never been elucidated together, leaving open questions about the uniqueness and advantages of ON/OFF channels for building dynamic vision systems that address real-world challenges. This paper highlights the importance of ON/OFF channels in motion perception by surveying current progress in both neuroscience and computational modelling, together with applications. Compared with related literature, this paper for the first time provides insights into how different selectivities to directional looming, translating, and small-sized target movement can be implemented on the basis of ON/OFF channels, in keeping with the soundness and robustness of biological principles. Finally, existing challenges and future trends of this bio-plausible computational structure for visual perception are discussed in connection with current hotspots in machine learning and advanced vision sensors such as event-driven cameras.
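The ON/OFF split referred to in this abstract can be illustrated with a few lines of code. The sketch below is not taken from the surveyed paper; it simply half-wave rectifies frame-to-frame luminance changes so that brightening feeds an ON channel and darkening an OFF channel. The function name, array layout, and toy stimulus are illustrative assumptions.

```python
import numpy as np

def on_off_channels(frames):
    """Split a grayscale image sequence into ON and OFF channels.

    Minimal sketch: the temporal luminance change at each pixel is half-wave
    rectified, so brightening maps to ON and darkening (sign-inverted) to OFF.
    """
    frames = np.asarray(frames, dtype=float)
    diff = np.diff(frames, axis=0)       # frame-to-frame luminance change
    on = np.maximum(diff, 0.0)           # luminance increments
    off = np.maximum(-diff, 0.0)         # luminance decrements
    return on, off

if __name__ == "__main__":
    # A bright bar drifting rightward over a dark background
    t, h, w = 8, 5, 20
    seq = np.zeros((t, h, w))
    for k in range(t):
        seq[k, :, 2 + k] = 1.0
    on, off = on_off_channels(seq)
    # The leading edge appears in the ON channel, the trailing edge in OFF
    print(on[3].sum(), off[3].sum())
```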
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
2. White PA. Time marking in perception. Neurosci Biobehav Rev 2023; 146:105043. [PMID: 36642288] [DOI: 10.1016/j.neubiorev.2023.105043]
Abstract
Several authors have proposed that perceptual information carries labels that identify temporal features, including time of occurrence, ordinal temporal relations, and brief durations. These labels serve to locate and organise perceptual objects, features, and events in time. In some proposals time marking has local, specific functions such as synchronisation of different features in perceptual processing. In other proposals time marking has general significance and is responsible for rendering perceptual experience temporally coherent, just as various forms of spatial information render the visual environment spatially coherent. These proposals, which all concern time marking on the millisecond time scale, are reviewed. It is concluded that time marking is vital to the construction of a multisensory perceptual world in which things are orderly with respect to both space and time, but that much more research is needed to ascertain its functions in perception and its neurophysiological foundations.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3YG, Wales, UK.
3. An Artificial Visual System for Motion Direction Detection Based on the Hassenstein–Reichardt Correlator Model. Electronics 2022. [DOI: 10.3390/electronics11091423]
Abstract
The perception of motion direction is essential for the survival of visual animals. Despite various theoretical and biophysical investigations of directional selectivity at the neural level, the systemic mechanism of motion direction detection remains elusive. Here, we develop an artificial visual system (AVS) based on the core computation of the Hassenstein–Reichardt correlator (HRC) model for global motion direction detection. With reference to biological investigations of Drosophila, we first describe a local motion-sensitive, direction-detecting neuron that responds only to ON motion signals with high pattern contrast in a particular direction. We then use the full-neuron-scheme motion direction detection mechanism from our previous research to detect the global motion direction. This mechanism enables our AVS to detect multiple directions in a two-dimensional view, with the global motion direction inferred from the outputs of all local motion-sensitive direction-detecting neurons. To verify the reliability of our AVS, we conduct a series of experiments and compare its performance with a time-considered convolutional neural network (CNN) and EfficientNetB0 under the same conditions. The experimental results demonstrate that our system reliably detects the direction of motion and that, among the three models, our AVS has the best motion direction detection capability.
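For readers unfamiliar with the HRC, the minimal correlator sketch below shows the core computation the abstract refers to: each arm delays one photoreceptor signal and multiplies it with the undelayed neighbouring signal, and the opponent subtraction yields a signed direction estimate. This is a generic textbook-style implementation, not the authors' AVS; the one-sample delay and sinusoidal test signal are illustrative choices.

```python
import numpy as np

def hrc_response(left, right, delay=1):
    """Minimal Hassenstein-Reichardt correlator for two photoreceptor signals.

    Each subunit multiplies one input, delayed by `delay` samples, with the
    undelayed neighbouring input; the opponent subtraction gives a signed
    direction estimate (positive for left-to-right motion here).
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])    # delayed left arm
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])  # delayed right arm
    return np.mean(d_left * right - d_right * left)

if __name__ == "__main__":
    t = np.arange(200)
    signal = np.sin(2 * np.pi * t / 20)
    # Rightward motion: the right receptor sees the same waveform a bit later
    print(hrc_response(signal, np.roll(signal, 3)))   # > 0
    print(hrc_response(np.roll(signal, 3), signal))   # < 0
```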
4. Luan H, Fu Q, Zhang Y, Hua M, Chen S, Yue S. A Looming Spatial Localization Neural Network Inspired by MLG1 Neurons in the Crab Neohelice. Front Neurosci 2022; 15:787256. [PMID: 35126038] [PMCID: PMC8814358] [DOI: 10.3389/fnins.2021.787256]
Abstract
Like most visual animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, track prey, and select mates. It therefore needs specialized neurons to process visual information and determine the spatial location of looming objects. In Neohelice granulata, the Monostratified Lobula Giant type 1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities for encoding spatial location information. The MLG1 neuronal ensemble can not only perceive the location of a looming stimulus, but is also thought to continuously influence the direction of movement, for example when escaping from a threatening looming target in relation to its position. Such characteristics make the MLG1s unique compared with typical looming-detection neurons in invertebrates, which cannot localize looming stimuli in space. Modeling the MLG1 ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision-avoidance systems for robots and vehicles. However, little computational modeling has been done to implement looming spatial localization analogous to the specific functionality of the MLG1 ensemble. To bridge this gap, we propose a model of the MLG1s and their presynaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field, inspired by the natural arrangement of the 16 MLG1 receptive fields, to encode and convey spatial information concerning looming objects with dynamically expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons. The systematic experiments demonstrate that our proposed MLG1 model works effectively and robustly to perceive and localize looming information, making it a promising candidate for intelligent machines that must interact with dynamic environments while avoiding collisions. This study also sheds light on a new type of neuromorphic visual sensor strategy that can extract looming objects with locational information in a quick and reliable manner.
Affiliation(s)
- Hao Luan
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Computational Intelligence Laboratory (CIL), School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Yicheng Zhang
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Mu Hua
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Shengyong Chen
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China
- Shigang Yue
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, China
- Computational Intelligence Laboratory (CIL), School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Correspondence: Shigang Yue
5. Adibi M, Zoccolan D, Clifford CWG. Editorial: Sensory Adaptation. Front Syst Neurosci 2021; 15:809000. [PMID: 34955772] [PMCID: PMC8692977] [DOI: 10.3389/fnsys.2021.809000]
Affiliation(s)
- Mehdi Adibi
- Neurodigit Lab, Department of Physiology, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Colin W G Clifford
- School of Psychology, University of New South Wales, Sydney, NSW, Australia
6. White PA. Perception of Happening: How the Brain Deals with the No-History Problem. Cogn Sci 2021; 45:e13068. [PMID: 34865252] [DOI: 10.1111/cogs.13068]
Abstract
In physics, the temporal dimension has units of infinitesimally brief duration. Given this, how is it possible to perceive things, such as motion, music, and vibrotactile stimulation, that involve extension across many units of time? To address this problem, it is proposed that there is what is termed an "information construct of happening" (ICOH), a simultaneous representation of recent, temporally differentiated perceptual information on the millisecond time scale. The main features of the ICOH are (i) time marking, semantic labeling of all information in the ICOH with ordinal temporal information and distance from what is informationally identified as the present moment, (ii) vector informational features that specify kind, direction, and rate of change for every feature in a percept, and (iii) connectives, information relating vector informational features at adjacent temporal locations in the ICOH. The ICOH integrates products of perceptual processing with recent historical information in sensory memory on the subsecond time scale. Perceptual information about happening in informational sensory memory is encoded in semantic form that preserves connected semantic trails of vector and timing information. The basic properties of the ICOH must be supported by a general and widespread timing mechanism that generates ordinal and interval timing information and it is suggested that state-dependent networks may suffice for that purpose. Happening, therefore, is perceived at a moment and is constituted by an information structure of connected recent historical information.
7. Kohn JR, Portes JP, Christenson MP, Abbott LF, Behnia R. Flexible filtering by neural inputs supports motion computation across states and stimuli. Curr Biol 2021; 31:5249-5260.e5. [PMID: 34670114] [DOI: 10.1016/j.cub.2021.09.061]
Abstract
Sensory systems flexibly adapt their processing properties across a wide range of environmental and behavioral conditions. Such variable processing complicates attempts to extract a mechanistic understanding of sensory computations. This is evident in the highly constrained, canonical Drosophila motion detection circuit, where the core computation underlying direction selectivity is still debated despite extensive studies. Here we measured the filtering properties of neural inputs to the OFF motion-detecting T5 cell in Drosophila. We report state- and stimulus-dependent changes in the shape of these signals, which become more biphasic under specific conditions. Summing these inputs within the framework of a connectome-constrained model of the circuit demonstrates that these shapes are sufficient to explain T5 responses to various motion stimuli. Thus, our stimulus- and state-dependent measurements reconcile motion computation with the anatomy of the circuit. These findings provide a clear example of how a basic circuit supports flexible sensory computation.
Affiliation(s)
- Jessica R Kohn
- The Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Jacob P Portes
- The Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Matthias P Christenson
- The Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- L F Abbott
- The Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Rudy Behnia
- The Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
8. Toffoli L, Scerif G, Snowling MJ, Norcia AM, Manning C. Global motion evoked potentials in autistic and dyslexic children: A cross-syndrome approach. Cortex 2021; 143:109-126. [PMID: 34399308] [PMCID: PMC8500218] [DOI: 10.1016/j.cortex.2021.06.018]
Abstract
Atypicalities in psychophysical thresholds for global motion processing have been reported in many neurodevelopmental conditions, including autism and dyslexia. Cross-syndrome comparisons of neural dynamics may help determine whether altered motion processing is a general marker of atypical development or condition-specific. Here, we assessed group differences in N2 peak amplitude (previously proposed as a marker of motion-specific processing) in typically developing (n = 57), autistic (n = 29) and dyslexic children (n = 44) aged 6-14 years, in two global motion tasks. High-density EEG data were collected while children judged the direction of global motion stimuli as quickly and accurately as possible, following a period of random motion. Using a data-driven component decomposition technique, we identified a reliable component that was maximal over occipital electrodes and had an N2-like peak at ~160 msec. We found no group differences in N2 peak amplitude, in either task. However, for both autistic and dyslexic children, there was evidence of atypicalities in later stages of processing that require follow up in future research. Our results suggest that early sensory encoding of motion information is unimpaired in dyslexic and autistic children. Group differences in later processing stages could reflect sustained global motion responses, decision-making, metacognitive processes and/or response generation, which may also distinguish between autistic and dyslexic individuals.
Affiliation(s)
- Lisa Toffoli
- Department of Developmental Psychology and Socialisation, University of Padua, Padova, Italy
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, CA, USA
- Catherine Manning
- Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK.
9. Kohl PL, Rutschmann B. Honey bees communicate distance via non-linear waggle duration functions. PeerJ 2021; 9:e11187. [PMID: 33868825] [PMCID: PMC8029670] [DOI: 10.7717/peerj.11187]
Abstract
Honey bees (genus Apis) can communicate the approximate location of a resource to their nestmates via the waggle dance. The distance to a goal is encoded by the duration of the waggle phase of the dance, but the precise shape of this distance-duration relationship is ambiguous: earlier studies (before the 1990s) proposed that it is non-linear, with the increase in waggle duration flattening with distance, while more recent studies suggested that it follows a simple linear function (i.e. a straight line). Strikingly, authors of earlier studies trained bees to much longer distances than authors of more recent studies, but unfortunately they usually measured the duration of dance circuits (waggle phase plus return phase of the dance), which is only a correlate of the bees’ distance signal. We trained honey bees (A. mellifera carnica) to visit sugar feeders over a relatively long array of distances between 0.1 and 1.7 km from the hive and measured the duration of both the waggle phase and the return phase of their dances from video recordings. The distance-related increase in waggle duration was better described by a non-linear model with a decreasing slope than by a simple linear model. The relationship was equally well captured by a model with two linear segments separated at a “break-point” at 1 km distance. In turn, the relationship between return phase duration and distance was sufficiently well described by a simple linear model. The data suggest that honey bees process flight distance differently before and beyond a certain threshold distance. While the physiological and evolutionary causes of this behavior remain to be explored, our results can be applied to improve the estimation of honey bee foraging distances based on the decoding of waggle dances.
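The model comparison described above can be mimicked on synthetic data with a short script. The sketch below fits a continuous two-segment ("broken-stick") linear model by grid-searching the break-point and solving for the intercept and slopes by least squares; it only illustrates the analysis idea, not the authors' actual fitting procedure, and the simulated distances, durations, and noise level are made up.

```python
import numpy as np

def broken_stick(x, x0, a, b1, b2):
    """Two linear segments joined continuously at the break-point x0."""
    return np.where(x < x0, a + b1 * x, a + b1 * x0 + b2 * (x - x0))

def fit_broken_stick(x, y, candidates):
    """Grid-search the break-point; fit intercept and slopes by least squares."""
    best = None
    for x0 in candidates:
        # Columns: intercept, slope below x0, extra slope added above x0
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - x0, 0.0)])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = np.sum((y - X @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, x0, coef)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dist = np.linspace(0.1, 1.7, 60)                     # feeder distance (km)
    dur = broken_stick(dist, 1.0, 0.1, 1.5, 0.8)         # synthetic waggle duration (s)
    dur += rng.normal(0.0, 0.05, dist.size)
    sse, x0, _ = fit_broken_stick(dist, dur, np.linspace(0.3, 1.5, 25))
    print(f"estimated break-point: {x0:.2f} km (SSE {sse:.3f})")
```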
Affiliation(s)
- Patrick L Kohl
- Department of Animal Ecology and Tropical Biology, Biocenter, University of Würzburg, Würzburg, Germany
- Benjamin Rutschmann
- Department of Animal Ecology and Tropical Biology, Biocenter, University of Würzburg, Würzburg, Germany
10. Martínez-García M, Zhang Y, Gordon T. Memory Pattern Identification for Feedback Tracking Control in Human-Machine Systems. Hum Factors 2021; 63:210-226. [PMID: 31647885] [DOI: 10.1177/0018720819881008]
Abstract
OBJECTIVE: The aim of this paper was to identify the characteristics of memory patterns with respect to a visual input, perceived by the human operator during a manual control task that consisted of following a moving target on a display with a cursor.
BACKGROUND: Manual control tasks involve nondeclarative memory. The memory encodings of different motor skills have been referred to as procedural memories. Procedural memories have a pattern, which this paper sought to identify for the particular case of a one-dimensional tracking task. Specifically, data recorded from human subjects controlling dynamic systems of different fractional order were investigated.
METHOD: A finite impulse response (FIR) controller was fitted to the data, and pattern analysis of the fitted parameters was performed. The FIR model was then reduced to a lower-order controller, and the stability of the closed-loop human-machine system was analysed using the simplified model.
RESULTS: It is shown that the FIR model can be used to identify and represent patterns in human procedural memories during manual control tasks. The obtained procedural memory pattern exhibits a time scale of about 650 ms before decay. Furthermore, the fitted controller is stable for systems with fractional order less than or equal to 1.
CONCLUSION: For systems of different fractional order, the proposed control scheme, based on an FIR model, can effectively characterize the linear properties of manual control in humans.
APPLICATION: This research supports a biofidelic approach to modeling human manual control based on visual feedback perceptions. Relevant applications include shared-control systems, in which a virtual human model assists the human during a control task, and human operator state monitoring.
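A minimal version of the FIR identification step described under METHOD can be sketched as an ordinary least-squares problem: the operator's output at each instant is regressed on the recent history of the perceived tracking error. The tap count, signal names, and the synthetic decaying kernel in the demo are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_fir(error, control, n_taps=32):
    """Least-squares fit of an FIR model: control[s] ~ sum_k h[k] * error[s-k].

    `error` is the perceived tracking error, `control` the operator's output
    (e.g. a cursor velocity command); `h` plays the role of the procedural-
    memory pattern discussed in the paper. Tap count is an illustrative choice.
    """
    error = np.asarray(error, dtype=float)
    control = np.asarray(control, dtype=float)
    n = len(error)
    # Row s holds error[s], error[s-1], ..., error[s-n_taps+1] (lag 0 first)
    X = np.array([error[s - n_taps + 1 : s + 1][::-1] for s in range(n_taps - 1, n)])
    y = control[n_taps - 1 :]
    return np.linalg.lstsq(X, y, rcond=None)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    e = rng.standard_normal(2000)
    true_h = np.exp(-np.arange(32) / 6.0)          # synthetic decaying "memory" kernel
    u = np.convolve(e, true_h)[:2000] + 0.01 * rng.standard_normal(2000)
    h_est = fit_fir(e, u, n_taps=32)
    print(np.max(np.abs(h_est - true_h)))          # small: the kernel is recovered
```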
11. Li L, Zhang Z, Lu J. Artificial fly visual joint perception neural network inspired by multiple-regional collision detection. Neural Netw 2020; 135:13-28. [PMID: 33338802] [DOI: 10.1016/j.neunet.2020.11.018]
Abstract
The biological visual system includes multiple types of motion-sensitive neurons that preferentially respond to specific perceptual regions. However, it remains an open question how to exploit such neurons to construct bio-inspired computational models for multiple-regional collision detection. To fill this gap, this work proposes a visual joint perception neural network with two subnetworks, a presynaptic and a postsynaptic neural network, inspired by the preferential-perception characteristics of three horizontal and vertical motion-sensitive neurons. Building on this neural network and three hazard-detection mechanisms, an artificial fly-visual synthesized collision detection model for multiple-regional collision detection is developed to monitor possible danger in cases where one or more moving objects appear in the whole field of view. The experiments support two conclusions: (i) the acquired neural network can effectively display the characteristics of visual movement, and (ii) the collision detection model, which outperforms the compared models, can effectively perform multiple-regional collision detection at a high success rate, taking only about 0.24 s to complete collision detection for each virtual or actual image frame at a resolution of 110×60.
Affiliation(s)
- Lun Li
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China
- Zhuhong Zhang
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China.
- Jiaxuan Lu
- College of Big Data and Information Engineering, Guizhou University, Guizhou Provincial Characteristic Key Laboratory of System Optimization and Scientific Computing, Guiyang, Guizhou 550025, PR China
12. Heitmann S, Ermentrout GB. Direction-selective motion discrimination by traveling waves in visual cortex. PLoS Comput Biol 2020; 16:e1008164. [PMID: 32877405] [PMCID: PMC7467221] [DOI: 10.1371/journal.pcbi.1008164]
Abstract
The majority of neurons in primary visual cortex respond selectively to bars of light that have a specific orientation and move in a specific direction. The spatial and temporal responses of such neurons are non-separable. How neurons accomplish that computational feat without resort to explicit time delays is unknown. We propose a novel neural mechanism whereby visual cortex computes non-separable responses by generating endogenous traveling waves of neural activity that resonate with the space-time signature of the visual stimulus. The spatiotemporal characteristics of the response are defined by the local topology of excitatory and inhibitory lateral connections in the cortex. We simulated the interaction between endogenous traveling waves and the visual stimulus using spatially distributed populations of excitatory and inhibitory neurons with Wilson-Cowan dynamics and inhibitory-surround coupling. Our model reliably detected visual gratings that moved with a given speed and direction provided that we incorporated neural competition to suppress false motion signals in the opposite direction. The findings suggest that endogenous traveling waves in visual cortex can impart direction-selectivity on neural responses without resort to explicit time delays. They also suggest a functional role for motion opponency in eliminating false motion signals.

It is well established that the so-called ‘simple cells’ of the primary visual cortex respond preferentially to oriented bars of light that move across the visual field with a particular speed and direction. The spatiotemporal responses of such neurons are said to be non-separable because they cannot be constructed from independent spatial and temporal neural mechanisms. Contemporary theories of how neurons compute non-separable responses typically rely on finely tuned transmission delays between signals from disparate regions of the visual field. However, the existence of such delays is controversial. We propose an alternative neural mechanism for computing non-separable responses that does not require transmission delays. It instead relies on the predisposition of the cortical tissue to spontaneously generate spatiotemporal waves of neural activity that travel with a particular speed and direction. We propose that the endogenous wave activity resonates with the visual stimulus to elicit direction-selective neural responses to visual motion. We demonstrate the principle in computer models and show that competition between opposing neurons robustly enhances their ability to discriminate between visual gratings that move in opposite directions.
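To make the modelling ingredients concrete, the toy script below simulates a 1-D ring of Wilson-Cowan excitatory/inhibitory rate populations with narrow excitatory and broader inhibitory ("inhibitory-surround") coupling and a localised visual drive. It is a deliberately simplified sketch of the class of model the abstract describes, not the authors' cortical simulation, and every parameter value is an illustrative guess.

```python
import numpy as np

def ring_kernel(n, sigma):
    """Periodic (ring) Gaussian coupling kernel, normalised to unit sum."""
    d = np.minimum(np.arange(n), n - np.arange(n))      # wrap-around distance
    k = np.exp(-0.5 * (d / sigma) ** 2)
    return k / k.sum()

def simulate(n=100, steps=2000, dt=0.1, seed=0):
    """Toy 1-D Wilson-Cowan sheet: narrow excitatory and wider inhibitory
    coupling (an 'inhibitory surround') plus a localised visual drive.
    All parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    ke = np.fft.rfft(ring_kernel(n, sigma=2.0))         # excitatory spread
    ki = np.fft.rfft(ring_kernel(n, sigma=6.0))         # inhibitory spread
    conv = lambda kern, x: np.fft.irfft(kern * np.fft.rfft(x), n)
    E = 0.1 * rng.random(n)
    I = 0.1 * rng.random(n)
    drive = np.zeros(n)
    drive[45:55] = 1.5                                  # localised stimulus
    for _ in range(steps):
        Ein, Iin = conv(ke, E), conv(ki, I)
        E += dt * (-E + sigmoid(12 * Ein - 10 * Iin + drive - 2.0))
        I += dt * (-I + sigmoid(10 * Ein - 2.0))
    return E, I

if __name__ == "__main__":
    E, _ = simulate()
    print("peak excitatory activity at unit", int(np.argmax(E)))
```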
Affiliation(s)
- Stewart Heitmann
- Victor Chang Cardiac Research Institute, Sydney, New South Wales, Australia
- G. Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pennsylvania, United States of America
13. Yildizoglu T, Riegler C, Fitzgerald JE, Portugues R. A Neural Representation of Naturalistic Motion-Guided Behavior in the Zebrafish Brain. Curr Biol 2020; 30:2321-2333.e6. [PMID: 32386533] [DOI: 10.1016/j.cub.2020.04.043]
Abstract
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Affiliation(s)
- Tugce Yildizoglu
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany
- Clemens Riegler
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Neurobiology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
- Ruben Portugues
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany; Institute of Neuroscience, Technical University of Munich, Munich 80802, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich 80802, Germany.
14. Manning C, Kaneshiro B, Kohler PJ, Duta M, Scerif G, Norcia AM. Neural dynamics underlying coherent motion perception in children and adults. Dev Cogn Neurosci 2019; 38:100670. [PMID: 31228678] [PMCID: PMC6688051] [DOI: 10.1016/j.dcn.2019.100670]
Abstract
Motion sensitivity increases during childhood, but little is known about the neural correlates. Most studies investigating children's evoked responses have not dissociated direction-specific and non-direction-specific responses. To isolate direction-specific responses, we presented coherently moving dot stimuli preceded by incoherent motion, to 6- to 7-year-olds (n = 34), 8- to 10-year-olds (n = 34), 10- to 12-year-olds (n = 34) and adults (n = 20). Participants reported the coherent motion direction while high-density EEG was recorded. Using a data-driven approach, we identified two stimulus-locked EEG components with distinct topographies: an early component with an occipital topography likely reflecting sensory encoding and a later, sustained positive component over centro-parietal electrodes that we attribute to decision-related processes. The component waveforms showed clear age-related differences. In the early, occipital component, all groups showed a negativity peaking at ~300 ms, like the previously reported coherent-motion N2. However, the children, unlike adults, showed an additional positive peak at ~200 ms, suggesting differential stimulus encoding. The later positivity in the centro-parietal component rose more steeply for adults than for the youngest children, likely reflecting age-related speeding of decision-making. We conclude that children's protracted development of coherent motion sensitivity is associated with maturation of both early sensory and later decision-related processes.
Affiliation(s)
- Catherine Manning
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK.
- Blair Kaneshiro
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Stanford University, 2452 Watson Court, Palo Alto, CA, 94303, USA
- Peter J Kohler
- Department of Psychology, Stanford University, Jordan Hall, 450 Serra Mall, Stanford, CA, 94305, USA
- Mihaela Duta
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
- Anthony M Norcia
- Department of Psychology, Stanford University, Jordan Hall, 450 Serra Mall, Stanford, CA, 94305, USA
15. Fu Q, Wang H, Hu C, Yue S. Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review. Artif Life 2019; 25:263-311. [PMID: 31397604] [DOI: 10.1162/artl_a_00297]
Abstract
Motion perception is a critical capability determining many aspects of insect life, including avoiding predators and foraging. A good number of motion detectors have been identified in insect visual pathways. Computational modeling of these motion detectors has not only provided effective solutions for artificial intelligence, but has also benefited the understanding of complicated biological visual systems. These biological mechanisms, shaped through millions of years of evolutionary development, offer solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models in the literature that originate from biological research on insect visual systems. These motion perception models or neural networks comprise the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple-system integration and hardware realization of these bio-inspired motion perception models.
Affiliation(s)
- Qinbing Fu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Hongxin Wang
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Cheng Hu
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
- Shigang Yue
- Guangzhou University, School of Mechanical and Electrical Engineering; Machine Life and Intelligence Research Centre
- University of Lincoln, Computational Intelligence Lab, School of Computer Science; Lincoln Centre for Autonomous Systems.
16. Cyr A, Thériault F, Ross M, Berberian N, Chartier S. Spiking Neurons Integrating Visual Stimuli Orientation and Direction Selectivity in a Robotic Context. Front Neurorobot 2018; 12:75. [PMID: 30524261] [PMCID: PMC6256284] [DOI: 10.3389/fnbot.2018.00075]
Abstract
Visual motion detection is essential for the survival of many species. The phenomenon includes several spatial properties, not fully understood at the level of a neural circuit. This paper proposes a computational model of a visual motion detector that integrates direction and orientation selectivity features. A recent experiment in the Drosophila model highlights that stimulus orientation influences the neural response of direction cells. However, this interaction and the significance at the behavioral level are currently unknown. As such, another objective of this article is to study the effect of merging these two visual processes when contextualized in a neuro-robotic model and an operant conditioning procedure. In this work, the learning task was solved using an artificial spiking neural network, acting as the brain controller for virtual and physical robots, showing a behavior modulation from the integration of both visual processes.
Affiliation(s)
- André Cyr
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Frédéric Thériault
- Department of Computer Science, Cégep du Vieux Montréal, Montreal, QC, Canada
- Matthew Ross
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Nareg Berberian
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
- Sylvain Chartier
- Conec Laboratory, School of Psychology, Ottawa University, Ottawa, ON, Canada
17. Nagy AJ, Takeuchi Y, Berényi A. Coding of self-motion-induced and self-independent visual motion in the rat dorsomedial striatum. PLoS Biol 2018; 16:e2004712. [PMID: 29939998] [PMCID: PMC6034886] [DOI: 10.1371/journal.pbio.2004712]
Abstract
Evolutionary development of vision has provided us with the capacity to detect moving objects. Concordant shifts of visual features suggest movements of the observer, whereas discordant changes are more likely to indicate independently moving objects, such as predators or prey. Such a distinction helps us to focus attention, adapt our behavior, and adjust our motor patterns to meet behavioral challenges. However, the neural basis of distinguishing self-induced from self-independent visual motion has not yet been clarified in unrestrained animals. In this study, we investigated the presence and origin of motion-related visual information in the striatum of rats, a hub of action selection and procedural memory. We found that while almost half of the neurons in the dorsomedial striatum are sensitive to visual motion congruent with locomotion (and many of them also code for spatial location), only a small subset of them, composed of fast-firing interneurons, could also perceive self-independent visual stimuli. These latter cells receive their visual input at least partially from the secondary visual cortex (V2). This differential visual sensitivity may be an important support in adjusting behavior to salient environmental events. It emphasizes the importance of investigating visual motion perception in unrestrained animals.
Affiliation(s)
- Anett J. Nagy
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Yuichi Takeuchi
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Antal Berényi
- MTA-SZTE “Momentum” Oscillatory Neuronal Networks Research Group, Department of Physiology, University of Szeged, Szeged, Hungary
- Neuroscience Institute, New York University, New York, New York, United States of America
18. Gravot CM, Knorr AG, Glasauer S, Straka H. It's not all black and white: visual scene parameters influence optokinetic reflex performance in Xenopus laevis tadpoles. J Exp Biol 2017; 220:4213-4224. [PMID: 29141881] [DOI: 10.1242/jeb.167700]
Abstract
The maintenance of visual acuity during active and passive body motion is ensured by gaze-stabilizing reflexes that aim at minimizing retinal image slip. For the optokinetic reflex (OKR), large-field visual motion of the surround forms the essential stimulus that activates eye movements. Properties of the moving visual world influence cognitive motion perception and the estimation of visual image velocity. Therefore, the performance of brainstem-mediated visuo-motor behaviors might also depend on image scene characteristics. Employing semi-intact preparations of mid-larval stages of Xenopus laevis tadpoles, we studied the influence of contrast polarity, intensity, contour shape and different motion stimulus patterns on the performance of the OKR and multi-unit optic nerve discharge during motion of a large-field visual scene. At high contrast intensities, the OKR amplitude was significantly larger for visual scenes with a positive contrast (bright dots on a dark background) compared with those with a negative contrast. This effect persisted for luminance-matched pairs of stimuli, and was independent of contour shape. The relative biases of OKR performance along with the independence of the responses from contour shape were closely matched by the optic nerve discharge evoked by the same visual stimuli. However, the multi-unit activity of retinal ganglion cells in response to a small single moving vertical edge was strongly influenced by the light intensity in the vertical neighborhood. This suggests that the underlying mechanism of OKR biases related to contrast polarity directly derives from visual motion-processing properties of the retinal circuitry.
Affiliation(s)
- Céline M Gravot
- Department Biology II, Ludwig-Maximilians-Universität München, Großhaderner Str. 2, 82152 Planegg, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Großhaderner Str. 2, 82152 Planegg, Germany
- Alexander G Knorr
- Center for Sensorimotor Research, Department of Neurology, University Hospital Munich, Feodor-Lynen-Str. 19, 81377 Munich, Germany; Institute for Cognitive Systems, TUM Department of Electrical and Computer Engineering, Technical University of Munich, Karlstr. 45/II, 80333 Munich, Germany
- Stefan Glasauer
- Center for Sensorimotor Research, Department of Neurology, University Hospital Munich, Feodor-Lynen-Str. 19, 81377 Munich, Germany
- Hans Straka
- Department Biology II, Ludwig-Maximilians-Universität München, Großhaderner Str. 2, 82152 Planegg, Germany
19. Woo KL, Rieucau G, Burke D. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon. Curr Zool 2018; 63:75-84. [PMID: 29491965] [PMCID: PMC5804146] [DOI: 10.1093/cz/zow074]
Abstract
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards’ ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
Affiliation(s)
- Kevin L Woo
- SUNY Empire State College, Metropolitan Center, 325 Hudson Street, New York, NY 10013-1005, USA; Department of Biological Sciences, Florida International University, 3000 Northeast 151 St, North Miami, FL 33181, USA; School of Psychology, University of Newcastle, 10 Chittaway Road, Ourimbah, New South Wales, 2258, Australia
- Guillaume Rieucau
- SUNY Empire State College, Metropolitan Center, 325 Hudson Street, New York, NY 10013-1005, USA; Department of Biological Sciences, Florida International University, 3000 Northeast 151 St, North Miami, FL 33181, USA; School of Psychology, University of Newcastle, 10 Chittaway Road, Ourimbah, New South Wales, 2258, Australia
- Darren Burke
- SUNY Empire State College, Metropolitan Center, 325 Hudson Street, New York, NY 10013-1005, USA; Department of Biological Sciences, Florida International University, 3000 Northeast 151 St, North Miami, FL 33181, USA; School of Psychology, University of Newcastle, 10 Chittaway Road, Ourimbah, New South Wales, 2258, Australia
20. Jancke D. Catching the voltage gradient-asymmetric boost of cortical spread generates motion signals across visual cortex: a brief review with special thanks to Amiram Grinvald. Neurophotonics 2017; 4:031206. [PMID: 28217713] [PMCID: PMC5301132] [DOI: 10.1117/1.nph.4.3.031206]
Abstract
Wide-field voltage imaging is unique in its capability to capture snapshots of activity, across the full gradient of average changes in membrane potentials from subthreshold to suprathreshold levels, of hundreds of thousands of superficial cortical neurons that are simultaneously active. Here, I highlight two examples where voltage-sensitive dye imaging (VSDI) was exploited to track gradual space-time changes of activity within milliseconds across several millimeters of cortex at submillimeter resolution: the line-motion condition, measured in Amiram Grinvald's laboratory more than 10 years ago, and, coming full circle with VSDI running in my own laboratory, another motion-inducing condition, in which two neighboring stimuli counterchange luminance simultaneously. In both examples, cortical spread is asymmetrically boosted, creating suprathreshold activity drawn out over primary visual cortex. These rapidly propagating waves may integrate brain signals that encode motion independent of direction-selective circuits.
Affiliation(s)
- Dirk Jancke
- Ruhr University Bochum, Optical Imaging Group, Institut für Neuroinformatik, Bochum, Germany
21. Tarawneh G, Nityananda V, Rosner R, Errington S, Herbert W, Cumming BG, Read JCA, Serrano-Pedraza I. Invisible noise obscures visible signal in insect motion detection. Sci Rep 2017; 7:3496. [PMID: 28615659] [PMCID: PMC5471215] [DOI: 10.1038/s41598-017-03732-7]
Abstract
The motion energy model is the standard account of motion detection in animals from beetles to humans. Despite this common basis, we show here that a difference in the early stages of visual processing between mammals and insects leads this model to make radically different behavioural predictions. In insects, early filtering is spatially lowpass, which makes the surprising prediction that motion detection can be impaired by "invisible" noise, i.e. noise at a spatial frequency that elicits no response when presented on its own as a signal. We confirm this prediction using the optomotor response of praying mantis Sphodromantis lineola. This does not occur in mammals, where spatially bandpass early filtering means that linear systems techniques, such as deriving channel sensitivity from masking functions, remain approximately valid. Counter-intuitive effects such as masking by invisible noise may occur in neural circuits wherever a nonlinearity is followed by a difference operation.
Affiliation(s)
- Ghaith Tarawneh
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom.
- Vivek Nityananda
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Ronny Rosner
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Steven Errington
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- William Herbert
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bldg 49 Room 2A50, Bethesda, MD, 20892-4435, USA
- Jenny C A Read
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
22. Li J, Lindemann JP, Egelhaaf M. Peripheral Processing Facilitates Optic Flow-Based Depth Perception. Front Comput Neurosci 2016; 10:111. [PMID: 27818631] [PMCID: PMC5073142] [DOI: 10.3389/fncom.2016.00111]
Abstract
Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question will be addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
Affiliation(s)
- Jinglin Li
- Department of Neurobiology and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
23. Leong JCS, Esch JJ, Poole B, Ganguli S, Clandinin TR. Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression. J Neurosci 2016; 36:8078-92. [PMID: 27488629] [PMCID: PMC4971360] [DOI: 10.1523/jneurosci.1272-16.2016]
Abstract
Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains unknown. We find that the receptive fields of both T4 and T5 exhibit spatiotemporally offset light-preferring and dark-preferring subfields, each obliquely oriented in spacetime. In a linear-nonlinear modeling framework, the spatiotemporal organization of the T5 receptive field predicts the activity of T5 in response to motion stimuli. These findings demonstrate that direction selectivity emerges from the enhancement of responses to motion in the preferred direction, as well as the suppression of responses to motion in the null direction. Thus, remarkably, T5 incorporates the essential algorithmic strategies used by the Hassenstein-Reichardt correlator and the Barlow-Levick detector. Our model for T5 also provides an algorithmic explanation for the selectivity of T5 for moving dark edges: our model captures all two- and three-point spacetime correlations relevant to motion in this stimulus class. More broadly, our findings reveal the contribution of input pathway visual processing, specifically center-surround, temporally biphasic receptive fields, to the generation of direction selectivity in T5. As the spatiotemporal receptive field of T5 in Drosophila is common to the simple cell in vertebrate visual cortex, our stimulus-response model of T5 will inform efforts in an experimentally tractable context to identify more detailed, mechanistic models of a prevalent computation.

SIGNIFICANCE STATEMENT: Feature selective neurons respond preferentially to astonishingly specific stimuli, providing the neurobiological basis for perception. Direction selectivity serves as a paradigmatic model of feature selectivity that has been examined in many species. While insect elementary motion detectors have served as premiere experimental models of direction selectivity for 60 years, the central question of their underlying algorithm remains unanswered. Using in vivo two-photon imaging of intracellular calcium signals, we measure the receptive fields of the first direction-selective cells in the Drosophila visual system, and define the algorithm used to compute the direction of motion. Computational modeling of these receptive fields predicts responses to motion and reveals how this circuit efficiently captures many useful correlations intrinsic to moving dark edges.
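The linear-nonlinear framework mentioned above can be illustrated with a toy space-time example: a receptive field with obliquely oriented light- and dark-preferring subfields responds more strongly to a dark edge drifting in the preferred direction than in the null direction. The filter shape, stimulus, and numbers below are invented for illustration and are not the fitted T4/T5 receptive fields from the paper.

```python
import numpy as np

def ln_response(stimulus, rf, nonlinearity=lambda x: np.maximum(x, 0.0)):
    """Linear-nonlinear (LN) model: dot product of the space-time stimulus with
    a spatiotemporal receptive field, followed by a static nonlinearity."""
    return nonlinearity(np.sum(stimulus * rf))

def drifting_edge(n_t, n_x, velocity):
    """Space-time (t, x) pattern of a dark edge whose border sits at
    x = 2 + velocity * t; dark pixels are -1, bright pixels +1."""
    t, x = np.meshgrid(np.arange(n_t), np.arange(n_x), indexing="ij")
    return np.where(x < 2 + velocity * t, -1.0, 1.0)

if __name__ == "__main__":
    n_t, n_x = 16, 16
    t, x = np.meshgrid(np.arange(n_t), np.arange(n_x), indexing="ij")
    # Space-time oriented receptive field: a light-preferring (positive) and a
    # dark-preferring (negative) subfield, both tilted along x = t, i.e. tuned
    # to rightward motion at ~1 pixel/frame (illustrative numbers only).
    rf = (np.exp(-0.5 * ((x - t - 3) / 1.5) ** 2)
          - np.exp(-0.5 * ((x - t + 1) / 1.5) ** 2))
    preferred = ln_response(drifting_edge(n_t, n_x, +1), rf)
    null = ln_response(drifting_edge(n_t, n_x, -1), rf)
    print(f"preferred: {preferred:.1f}, null: {null:.1f}")   # preferred >> null
```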
Collapse
Affiliation(s)
| | - Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California 94305
| | | |
Collapse
|
24
|
Propagating Cortical Waves May Underlie Illusory Motion Perception. J Neurosci 2016; 36:6854-6. [PMID: 27358444 DOI: 10.1523/jneurosci.1167-16.2016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Accepted: 05/23/2016] [Indexed: 11/21/2022] Open
|
25
|
Seidel Malkinson T, Pertzov Y, Zohary E. Turning Symbolic: The Representation of Motion Direction in Working Memory. Front Psychol 2016; 7:165. [PMID: 26909059 PMCID: PMC4754772 DOI: 10.3389/fpsyg.2016.00165] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2015] [Accepted: 01/28/2016] [Indexed: 11/21/2022] Open
Abstract
What happens to the representation of a moving stimulus when it is no longer present and its motion direction has to be maintained in working memory (WM)? Is the initial, sensorial representation maintained during the delay period or is there another representation, at a higher level of abstraction? It is also possible that multiple representations co-exist in WM, manifesting different facets of sensory and more abstract features. To address these questions, we investigated the mnemonic representation of motion direction in a series of three psychophysical experiments, using a delayed motion-discrimination task (relative clockwise/counter-clockwise judgment). First, we show that a change in the dots’ contrast polarity does not hamper performance. Next, we demonstrate that performance is unaffected by relocation of the Test stimulus in either retinotopic or spatiotopic coordinate frames. Finally, we show that an arrow-shaped cue presented during the delay interval between the Sample and Test stimuli strongly biases performance toward the direction of the arrow, although the cue itself is non-informative (it does not predict the correct answer). These results indicate that the representation of motion direction in WM could be independent of the physical features of the stimulus (polarity or position) and has non-sensorial abstract qualities. It is plausible that an abstract mnemonic trace might be activated alongside a more basic, analog representation of the stimulus. We speculate that the specific sensitivity of the mnemonic representation to the arrow-shaped symbol may stem from the long-term learned association between directions and the hours on a clock face.
Collapse
Affiliation(s)
- Tal Seidel Malkinson
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel; Institut National de la Santé et de la Recherche Médicale U1127, Centre National de la Recherche Scientifique UMR 7225, UMR S 1127, Évaluation Physiologique chez les Sujets Sains et Atteints de Troubles Cognitifs (PICNIC Lab), Institut du Cerveau et de la Moelle Épinière, Sorbonne Universités, Université Pierre et Marie Curie-Paris 06, Paris, France
| | - Yoni Pertzov
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Ehud Zohary
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
26
|
Cooper SA, O'Sullivan M. Here, there and everywhere: higher visual function and the dorsal visual stream. Pract Neurol 2016; 16:176-83. [PMID: 26786007 DOI: 10.1136/practneurol-2015-001168] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/14/2015] [Indexed: 01/12/2023]
Abstract
The dorsal visual stream, often referred to as the 'where' stream, represents the pathway taken by visual information from the primary visual cortex to the posterior parietal lobe and onwards. It partners the ventral or 'what' stream, the subject of a previous review and a largely temporal-lobe-based system. Here, we consider the dorsal stream disorders of perception (simultanagnosia, akinetopsia) along with their consequences for action (eg, optic ataxia and oculomotor apraxia, along with Balint's syndrome). The role of the dorsal stream in blindsight and hemispatial neglect is also considered.
Collapse
Affiliation(s)
- Sarah Anne Cooper
- Department of Neurology, Hurstwood Park Neurological Centre, Princess Royal Hospital, Haywards Heath, UK
| | - Michael O'Sullivan
- Department of Basic and Clinical Neurosciences, Institute of Psychiatry Psychology & Neuroscience, King's College London, London, UK
| |
Collapse
|
27
|
A Class of Visual Neurons with Wide-Field Properties Is Required for Local Motion Detection. Curr Biol 2015; 25:3178-89. [PMID: 26670999 DOI: 10.1016/j.cub.2015.11.018] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Revised: 10/05/2015] [Accepted: 11/05/2015] [Indexed: 12/28/2022]
Abstract
Visual motion cues are used by many animals to guide navigation across a wide range of environments. Long-standing theoretical models have made predictions about the computations that compare light signals across space and time to detect motion. Using connectomic and physiological approaches, candidate circuits that can implement various algorithmic steps have been proposed in the Drosophila visual system. These pathways connect photoreceptors, via interneurons in the lamina and the medulla, to direction-selective cells in the lobula and lobula plate. However, the functional architecture of these circuits remains incompletely understood. Here, we use a forward genetic approach to identify the medulla neuron Tm9 as critical for motion-evoked behavioral responses. Using in vivo calcium imaging combined with genetic silencing, we place Tm9 within motion-detecting circuitry. Tm9 receives functional inputs from the lamina neurons L3 and, unexpectedly, L1 and passes information onto the direction-selective T5 neuron. Whereas the morphology of Tm9 suggested that this cell would inform circuits about local points in space, we found that the Tm9 spatial receptive field is large. Thus, this circuit informs elementary motion detectors about a wide region of the visual scene. In addition, Tm9 exhibits sustained responses that provide a tonic signal about incoming light patterns. Silencing Tm9 dramatically reduces the response amplitude of T5 neurons under a broad range of different motion conditions. Thus, our data demonstrate that sustained and wide-field signals are essential for elementary motion processing.
Collapse
|
28
|
Hollmann V, Lucks V, Kurtz R, Engelmann J. Adaptation-induced modification of motion selectivity tuning in visual tectal neurons of adult zebrafish. J Neurophysiol 2015; 114:2893-902. [PMID: 26378206 DOI: 10.1152/jn.00568.2015] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2015] [Accepted: 09/15/2015] [Indexed: 11/22/2022] Open
Abstract
In the developing brain, training-induced emergence of direction selectivity and plasticity of orientation tuning appear to be widespread phenomena. These are found in the visual pathway across different classes of vertebrates. Moreover, short-term plasticity of orientation tuning in the adult brain has been demonstrated in several species of mammals. However, it is unclear whether neuronal orientation and direction selectivity in nonmammalian species remains modifiable through short-term plasticity in the fully developed brain. To address this question, we analyzed motion tuning of neurons in the optic tectum of adult zebrafish by calcium imaging. Overall, orientation and direction selectivity were enhanced by adaptation, responses of previously orientation-selective neurons were sharpened, and even adaptation-induced emergence of selectivity in previously nonselective neurons was observed in some cases. The different observed effects depend mainly on the angular difference between the previously preferred direction and the adaptation direction. In those neurons in which a shift of the preferred orientation or direction was induced by adaptation, repulsive shifts (i.e., away from the adapter) were more prevalent than attractive shifts. A further novel finding of our study on visually induced adaptation was that repulsive and attractive shifts can occur within one brain area, even with uniform stimuli. The type of shift induced also depends on the difference between the adapting and the initially preferred stimulus direction. Our data indicate that, even within the fully developed optic tectum, short-term plasticity might have an important role in adjusting neuronal tuning functions to current stimulus conditions.
Collapse
Affiliation(s)
- Vanessa Hollmann
- Active Sensing and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany; and
| | - Valerie Lucks
- Active Sensing and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany; and
| | - Rafael Kurtz
- Department of Neurobiology, Bielefeld University, Bielefeld, Germany
| | - Jacob Engelmann
- Active Sensing and Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany; and
| |
Collapse
|
29
|
Glaze CM, Kable JW, Gold JI. Normative evidence accumulation in unpredictable environments. eLife 2015; 4. [PMID: 26322383 PMCID: PMC4584511 DOI: 10.7554/elife.08825] [Citation(s) in RCA: 81] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2015] [Accepted: 08/30/2015] [Indexed: 11/22/2022] Open
Abstract
In our dynamic world, decisions about noisy stimuli can require temporal accumulation of evidence to identify steady signals, differentiation to detect unpredictable changes in those signals, or both. Normative models can account for learning in these environments but have not yet been applied to faster decision processes. We present a novel, normative formulation of adaptive learning models that forms decisions by acting as a leaky accumulator with non-absorbing bounds. These dynamics, derived for both discrete and continuous cases, depend on the expected rate of change of the statistics of the evidence and balance signal identification and change detection. We found that, for two different tasks, human subjects learned these expectations, albeit imperfectly, then used them to make decisions in accordance with the normative model. The results represent a unified, empirically supported account of decision-making in unpredictable environments that provides new insights into the expectation-driven dynamics of the underlying neural signals. DOI:http://dx.doi.org/10.7554/eLife.08825.001 Organisms gather information from their surroundings to make decisions. Traditionally, neuroscientists have investigated decision-making by first asking what would be optimal for the animal, and then seeing whether and how the brain implements the optimal process. This approach has assumed that the environment consists of noisy, but stable, signals that the brain must decipher by accumulating information over time and ‘averaging out’ the noise. Previous research had suggested that most animals can accumulate information. However, these studies also showed that animals, including humans, often fall short of the optimal solution by being overly sensitive to noise and failing to completely average it out. Of course, in real life, the signals themselves can change abruptly and unpredictably, challenging us to distinguish noise from changes in the underlying signals. If a moving target suddenly jolts to the right, is that change part of the normal jitter that should be ignored, or does it predict where the target will be next? How do we know when to keep old information that is still relevant to the decision, and when to discard the old information because a change might have occurred that renders it irrelevant? Glaze et al. have addressed this question by building optimal change detection into the traditional ‘information-accumulation’ framework. The model suggests that what researchers previously thought was an over-sensitivity to noise might actually be optimal for the real-life challenge of detecting change. In two different tasks, Glaze et al. tested human volunteers to see if they could make decisions in ways predicted by the model. One task involved the volunteers making decisions about which one of two possible sources of noisy signals generated a given piece of information, with the correct answer changing unpredictably every 1–20 trials. The other task involved looking at a crowd of moving dots, which jolted and wobbled as they changed direction, and the volunteers had to decide which direction the dots were moving at the end of each trial. Both experiments showed that the volunteers were remarkably good at making decisions in the ways predicted by the new model, and incorporated learned expectations about the rate of change in underlying signals. The results suggest that humans, and potentially other organisms, are capable of detecting changes in the optimal ways suggested by the decision-making model. The study also makes predictions about what kinds of neural patterns neuroscientists might find when measuring brain activity while organisms do similar tasks. DOI:http://dx.doi.org/10.7554/eLife.08825.002
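The leaky accumulator with non-absorbing bounds described above can be sketched as an accumulation of log-likelihood ratios whose prior belief is discounted on every step according to an assumed hazard rate of change, so belief saturates rather than locking in. The Gaussian evidence model, hazard rate, and variable names below are illustrative assumptions, not the authors' exact task parameters.

```python
# Minimal sketch of hazard-rate-discounted evidence accumulation
# (a leaky accumulator with non-absorbing bounds).
import numpy as np

def discount(L, hazard):
    """Map the prior log-posterior odds L through the expected probability
    that the hidden state changed since the last sample."""
    return np.log(((1 - hazard) * np.exp(L) + hazard) /
                  ((1 - hazard) + hazard * np.exp(L)))

def accumulate(samples, hazard, sigma=1.0, mu=1.0):
    """Accumulate Gaussian log-likelihood ratios for state +mu vs -mu."""
    L, trace = 0.0, []
    for x in samples:
        llr = 2 * mu * x / sigma ** 2        # LLR for symmetric Gaussian means
        L = llr + discount(L, hazard)        # non-absorbing: belief saturates
        trace.append(L)
    return np.array(trace)

rng = np.random.default_rng(0)
true_state = np.r_[np.ones(50), -np.ones(50)]   # hidden state flips halfway
samples = true_state + rng.normal(0.0, 1.0, size=100)
belief = accumulate(samples, hazard=0.1)
print("belief before / after the change:", belief[49].round(2), belief[-1].round(2))
```

With a larger assumed hazard rate the accumulator leaks more and tracks the change faster at the cost of noisier steady-state beliefs, which is the trade-off between signal identification and change detection the abstract describes.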
Collapse
Affiliation(s)
- Christopher M Glaze
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
| | - Joseph W Kable
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
| | - Joshua I Gold
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
| |
Collapse
|
30
|
Hietanen MA. The relative contributions of global and local acceleration components on speed perception and discriminability following adaptation. Vision Res 2015; 115:135-41. [PMID: 26278165 DOI: 10.1016/j.visres.2015.06.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2015] [Revised: 06/12/2015] [Accepted: 06/23/2015] [Indexed: 11/13/2022]
Abstract
The perception of speed is dependent on the history of previously presented speeds. Adaptation to a given speed typically results in a reduction of perceived speed and an increase in speed discriminability, and in certain circumstances can result in an increase in perceived speed. To determine the relative contributions of local and global speed components to perceived speed, this experiment used expanding dot flow fields with accelerating (global), decelerating (global) and mixed accelerating/decelerating (local) speed patterns. Profound decreases in perceived speed are found when viewing low test speeds after adaptation to high speeds. Small increases in the perceived speed of high test speeds occur following adaptation to low speeds. There were small but significant differences in perceived stimulus speed after adaptation due to different acceleration profiles. No evidence for global modulation of speed discriminability following adaptation was found.
Collapse
Affiliation(s)
- Markus A Hietanen
- National Vision Research Institute, Australian College of Optometry, Cnr Cardigan and Keppel Street, Carlton, VIC 3053, Australia; ARC Centre of Excellence for Integrative Brain Function and Department of Optometry and Vision Sciences, University of Melbourne, Parkville, VIC 3010, Australia.
| |
Collapse
|
31
|
Todd NPM, Lee CS. Source analysis of electrophysiological correlates of beat induction as sensory-guided action. Front Psychol 2015; 6:1178. [PMID: 26321991 PMCID: PMC4536380 DOI: 10.3389/fpsyg.2015.01178] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2014] [Accepted: 07/27/2015] [Indexed: 11/13/2022] Open
Abstract
In this paper we present a reanalysis of electrophysiological data originally collected to test a sensory-motor theory of beat induction (Todd et al., 2002; Todd and Seiss, 2004; Todd and Lee, 2015). The reanalysis is conducted in the light of more recent findings and in particular the demonstration that auditory evoked potentials contain a vestibular dependency. At the core of the analysis is a model which predicts brain dipole source current activity over time in temporal and frontal lobe areas during passive listening to a rhythm, or active synchronization, where it dissociates the frontal activity into distinct sources which can be identified as respectively pre-motor and motor in origin. The model successfully captures the main features of the rhythm in showing that the metrical structure is manifest in an increase in source current activity during strong compared to weak beats. In addition the outcomes of modeling suggest that: (1) activity in both temporal and frontal areas contribute to the metrical percept and that this activity is distributed over time; (2) transient, time-locked activity associated with anticipated beats is increased when a temporal expectation is confirmed following a previous violation, such as a syncopation; (3) two distinct processes are involved in auditory cortex, corresponding to tangential and radial (possibly vestibular dependent) current sources. We discuss the implications of these outcomes for the insights they give into the origin of metrical structure and the power of syncopation to induce movement and create a sense of groove.
Collapse
Affiliation(s)
- Neil P. M. Todd
- Faculty of Life Science, University of Manchester, Manchester, UK
| | | |
Collapse
|
32
|
Abstract
Three decades ago, Rockel et al. proposed that neuronal surface densities (number of neurons under a square millimeter of surface) of primary visual cortices (V1s) in primates are 2.5 times higher than the neuronal densities of V1s in nonprimates or of many other cortical regions in primates and nonprimates. This claim has remained controversial and much debated. We replicated the study of Rockel et al. with attention to modern stereological precepts and show that indeed primate V1 is 2.5 times denser (number of neurons per square millimeter) than many other cortical regions and nonprimate V1s; we also show that V2 is 1.7 times as dense. As primate V1s are denser, they have more neurons and thus more pinwheels than similar-sized nonprimate V1s, which explains why primates have better visual acuity.
Collapse
|
33
|
Nityananda V, Tarawneh G, Jones L, Busby N, Herbert W, Davies R, Read JCA. The contrast sensitivity function of the praying mantis Sphodromantis lineola. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2015; 201:741-50. [PMID: 25894490 PMCID: PMC4510923 DOI: 10.1007/s00359-015-1008-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2014] [Revised: 03/25/2015] [Accepted: 03/30/2015] [Indexed: 12/02/2022]
Abstract
The detection of visual motion and its direction is a fundamental task faced by many visual systems. The motion detection system of insects has been widely studied, with the majority of studies focussing on flies and bees. Here we characterize the contrast sensitivity of motion detection in the praying mantis Sphodromantis lineola, an ambush predator that stays stationary for long periods of time while preying on fast-moving prey. In this respect, its visual behaviour differs from that of previously studied insects, and we might therefore expect its motion detection system to differ from theirs. To investigate the sensitivity of the mantis we analyzed its optomotor response to drifting gratings with different contrasts and spatio-temporal frequencies. We find that the contrast sensitivity of the mantis depends on the spatial and temporal frequencies present in the stimulus and is separably tuned to spatial and temporal frequency rather than specifically to object velocity. Our results also suggest that mantises are sensitive to a broad range of velocities, in which they differ from bees and are more similar to hoverflies. We discuss our results in relation to the contrast sensitivities of other insects and the visual ecology of the mantis.
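A concrete way to read the claim of separable spatio-temporal tuning is that the sensitivity surface S(fs, ft) factorizes into independent spatial and temporal profiles, whereas velocity tuning depends on the ratio ft/fs and does not factorize. The sketch below uses made-up surfaces (not the paper's data) and a rank-1 variance fraction to tell the two cases apart; all frequency values and peak parameters are illustrative assumptions.

```python
# Minimal sketch: separable spatio-temporal tuning is rank-1,
# velocity tuning (a function of ft/fs) is not.
import numpy as np

fs = np.array([0.05, 0.1, 0.2, 0.4])      # spatial frequencies (cyc/deg)
ft = np.array([1.0, 2.0, 4.0, 8.0])       # temporal frequencies (Hz)
FS, FT = np.meshgrid(fs, ft, indexing="ij")

separable = np.exp(-(np.log2(FS / 0.1)) ** 2) * np.exp(-(np.log2(FT / 4.0)) ** 2)
velocity_tuned = np.exp(-(np.log2((FT / FS) / 20.0)) ** 2)   # peaks near 20 deg/s

def rank1_fraction(S):
    """Fraction of variance captured by the best separable (rank-1) approximation."""
    sv = np.linalg.svd(S, compute_uv=False)
    return sv[0] ** 2 / np.sum(sv ** 2)

print("separable surface, rank-1 variance:     ", round(rank1_fraction(separable), 3))
print("velocity-tuned surface, rank-1 variance:", round(rank1_fraction(velocity_tuned), 3))
```

The separable surface returns a rank-1 fraction of 1.0 by construction, while the velocity-tuned surface falls well below it, which is the signature one would look for in a measured contrast sensitivity matrix.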
Collapse
Affiliation(s)
- Vivek Nityananda
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK,
| | | | | | | | | | | | | |
Collapse
|
34
|
|
35
|
Kido K, Makioka S. Serial order learning of subliminal visual stimuli: evidence of multistage learning. Front Psychol 2015; 6:76. [PMID: 25762947 PMCID: PMC4329799 DOI: 10.3389/fpsyg.2015.00076] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2014] [Accepted: 01/14/2015] [Indexed: 12/04/2022] Open
Abstract
It is widely known that statistical learning of visual symbol sequences occurs implicitly (Kim et al., 2009). In this study, we examined whether people can learn the serial order of visual symbols when they cannot detect them. During the familiarization phase, triplets or quadruplets of novel symbols were presented to one eye under continuous flash suppression (CFS). Perception of the symbols was completely suppressed by the flash patterns presented to the other eye [binocular rivalry (BR)]. During the test phase, detection latencies were shorter for symbols located later in the triplets or quadruplets. These results indicate that serial order learning occurs even when the participants cannot detect the stimuli. We also found that detection became slower for the last item of the triplets or quadruplets. This phenomenon occurred only when the participants were familiarized with the symbols under CFS, suggesting that the subsequent symbols interfered with the processing of the target symbol when conscious perception was suppressed. We further examined the nature of the interference and found that it occurred only when the subsequent symbol was not fixed. This result suggests that serial order learning under BR is restricted to fixed order sequences. Statistical learning of the symbols’ transition probability might not occur when the participants cannot detect the symbols. We confirmed this hypothesis by conducting another experiment wherein the transition probability of the symbol sequence was manipulated.
Collapse
Affiliation(s)
- Kaede Kido
- Information Processing Center, Osaka Kyoiku University, Kashihara, Japan
| | - Shogo Makioka
- Department of Human Sciences, Osaka Prefecture University, Sakai, Japan
| |
Collapse
|
36
|
Multisensory Perception: Pinpointing Visual Enhancement by Appropriate Odors. Curr Biol 2015; 25:R196-8. [DOI: 10.1016/j.cub.2015.01.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
37
|
Affiliation(s)
- Nathan S. Hart
- School of Animal Biology and the Oceans Institute, The University of Western Australia, Crawley, Perth, Australia
| | - Shaun P. Collin
- School of Animal Biology and the Oceans Institute, The University of Western Australia, Crawley, Perth, Australia
| |
Collapse
|
38
|
Egelhaaf M, Kern R, Lindemann JP. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains. Front Neural Circuits 2014; 8:127. [PMID: 25389392 PMCID: PMC4211400 DOI: 10.3389/fncir.2014.00127] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Accepted: 10/05/2014] [Indexed: 11/13/2022] Open
Abstract
Despite their miniature brains, insects such as flies, bees, and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around ("optic flow") to accomplish their extraordinary performance. To do so, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases where the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, motion detectors of the kind widespread in biological systems do not veridically represent the velocity of the optic flow vectors; their responses also reflect textural information about the environment. This characteristic has often been regarded as a limitation of a biological motion detection mechanism. In contrast, our analyses, which challenged insect movement detectors with the image flow generated during translatory locomotion through cluttered natural environments, lead us to conclude that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates, in a computationally parsimonious way, the environment into behaviorally relevant nearby objects and, in many behavioral contexts, less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism.
Collapse
Affiliation(s)
- Martin Egelhaaf
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
| | - Roland Kern
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
| | - Jens Peter Lindemann
- Department of Neurobiology and Center of Excellence “Cognitive Interaction Technology” (CITEC), Bielefeld University, Bielefeld, Germany
| |
Collapse
|
39
|
Abstract
Motion detection is a fundamental property of the visual system. The gold standard for studying and understanding this function is the motion energy model. This computational tool relies on spatiotemporally selective filters that capture the change in spatial position over time afforded by moving objects. Although the filters are defined in space-time, their human counterparts have never been studied in their native spatiotemporal space but rather in the corresponding frequency domain. When this frequency description is back-projected to spatiotemporal description, not all characteristics of the underlying process are retained, leaving open the possibility that important properties of human motion detection may have remained unexplored. We derived descriptors of motion detectors in native space-time, and discovered a large unexpected dynamic structure involving a >2× change in detector amplitude over the first ∼100 ms. This property is not predicted by the energy model, generalizes across the visual field, and is robust to adaptation; however, it is silenced by surround inhibition and is contrast dependent. We account for all results by extending the motion energy model to incorporate a small network that supports feedforward spread of activation along the motion trajectory via a simple gain-control circuit.
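The core of the motion energy model referenced above can be written as quadrature pairs of space-time-oriented filters whose squared, summed outputs give a phase-invariant, direction-selective energy. The sketch below is a minimal, assumed illustration; the Gabor parameters and the opponent stage are not the detectors characterized in the study.

```python
# Minimal motion energy sketch: quadrature space-time Gabors, squared and summed.
import numpy as np

x = np.linspace(-2, 2, 65)             # space (deg)
t = np.linspace(0, 0.25, 65)           # time (s)
X, T = np.meshgrid(x, t, indexing="ij")

def oriented_pair(velocity, sf=1.0):
    """Even/odd (quadrature) space-time Gabors tilted to prefer `velocity`."""
    phase = 2 * np.pi * sf * (X - velocity * T)
    envelope = np.exp(-X ** 2 / 1.0) * np.exp(-((T - 0.12) ** 2) / 0.005)
    return envelope * np.cos(phase), envelope * np.sin(phase)

def motion_energy(stimulus, velocity):
    """Phase-invariant energy: squared projections onto the quadrature pair."""
    even, odd = oriented_pair(velocity)
    return np.sum(even * stimulus) ** 2 + np.sum(odd * stimulus) ** 2

grating = np.sin(2 * np.pi * 1.0 * (X - 8.0 * T))    # grating drifting at +8 deg/s
print("rightward energy:", motion_energy(grating, +8.0))
print("leftward  energy:", motion_energy(grating, -8.0))
print("opponent  energy:", motion_energy(grating, +8.0) - motion_energy(grating, -8.0))
```

In this stationary form the detector amplitude is fixed; the dynamic structure reported in the abstract would correspond to letting the effective gain of such filters evolve over the first ~100 ms, for example through a feedforward gain-control stage along the motion trajectory.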
Collapse
|
40
|
Brooks DS. The role of models in the process of epistemic integration: the case of the Reichardt motion detector. HISTORY AND PHILOSOPHY OF THE LIFE SCIENCES 2014; 36:90-113. [PMID: 25515265 DOI: 10.1007/s40656-014-0006-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2013] [Accepted: 03/16/2014] [Indexed: 06/04/2023]
Abstract
Recent work on epistemic integration in the life sciences has emphasized the importance of integration in thinking about explanatory practice in science, particularly for articulating a robust alternative to reductionism and anti-reductionism. This paper analyzes the role of models in balancing the relative contributions of lower- and higher-level epistemic resources involved in this process. Integration between multiple disciplines proceeds by constructing a problem agenda (Love, Philos Sci 75(5): 874-886, 2008), a set of interrelated problems that structures the problem space of a complex phenomenon that is investigated by many disciplines. The use of models, it is argued, marks changes in a phenomenon's problem agenda, depending on the task expected of them. In particular, the paper emphasizes the sensitivity of a problem agenda to changing attitudes in the solutions to the conceptual and empirical items constituting that agenda. The analysis will proceed by means of a case study, the Reichardt motion detector, a model that has been vital to the methodological and conceptual development of research on motion detection, especially in invertebrates. As will be seen, the history of the Reichardt model will exemplify the dynamic changes that occur in the interdisciplinary negotiations that comprise the active efforts of various sciences working to integrate their resources.
Collapse
Affiliation(s)
- Daniel S Brooks
- Fakultät für Geschichtswissenschaft, Philosophie und Theologie, Abteilung Philosophie, Universität Bielefeld, Bielefeld, Germany,
| |
Collapse
|
41
|
Abstract
Visual motion cues provide animals with critical information about their environment and guide a diverse array of behaviors. The neural circuits that carry out motion estimation provide a well-constrained model system for studying the logic of neural computation. Through a confluence of behavioral, physiological, and anatomical experiments, taking advantage of the powerful genetic tools available in the fruit fly Drosophila melanogaster, an outline of the neural pathways that compute visual motion has emerged. Here we describe these pathways, the evidence supporting them, and the challenges that remain in understanding the circuits and computations that link sensory inputs to behavior. Studies in flies and vertebrates have revealed a number of functional similarities between motion-processing pathways in different animals, despite profound differences in circuit anatomy and structure. The fact that different circuit mechanisms are used to achieve convergent computational outcomes sheds light on the evolution of the nervous system.
Collapse
Affiliation(s)
- Marion Silies
- Department of Neurobiology, Stanford University, Stanford, California 94305
| | | | | |
Collapse
|
42
|
Giessel AJ, Datta SR. Olfactory maps, circuits and computations. Curr Opin Neurobiol 2013; 24:120-32. [PMID: 24492088 DOI: 10.1016/j.conb.2013.09.010] [Citation(s) in RCA: 69] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2013] [Revised: 09/06/2013] [Accepted: 09/20/2013] [Indexed: 11/17/2022]
Abstract
Sensory information in the visual, auditory and somatosensory systems is organized topographically, with key sensory features ordered in space across neural sheets. Despite the existence of a spatially stereotyped map of odor identity within the olfactory bulb, it is unclear whether the higher olfactory cortex uses topography to organize information about smells. Here, we review recent work on the anatomy, microcircuitry and neuromodulation of two higher-order olfactory areas: the piriform cortex and the olfactory tubercle. The piriform is an archicortical region with an extensive local associational network that constructs representations of odor identity. The olfactory tubercle is an extension of the ventral striatum that may use reward-based learning rules to encode odor valence. We argue that in contrast to brain circuits for other sensory modalities, both the piriform and the olfactory tubercle largely discard any topography present in the bulb and instead use distributive afferent connectivity, local learning rules and input from neuromodulatory centers to build behaviorally relevant representations of olfactory stimuli.
Collapse
Affiliation(s)
- Andrew J Giessel
- Harvard Medical School, Department of Neurobiology, 220 Longwood Avenue, Boston, MA 02115, United States
| | - Sandeep Robert Datta
- Harvard Medical School, Department of Neurobiology, 220 Longwood Avenue, Boston, MA 02115, United States.
| |
Collapse
|
43
|
Arshad Q, Nigmatullina Y, Bronstein AM. Unidirectional visual motion adaptation induces reciprocal inhibition of human early visual cortex excitability. Clin Neurophysiol 2013; 125:798-804. [PMID: 24120313 DOI: 10.1016/j.clinph.2013.09.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2012] [Revised: 05/17/2013] [Accepted: 09/16/2013] [Indexed: 10/26/2022]
Abstract
OBJECTIVES: Behavioural observations provided by the waterfall illusion suggest that motion perception is mediated by a comparison of the responsiveness of direction-selective neurones. These are proposed to be optimally tuned for motion detection in different directions. Critically, however, despite the behavioural observations, direct evidence of this relationship at a cortical level in humans is lacking. By utilising the state-dependent properties of transcranial magnetic stimulation (TMS), one can probe the excitability of specific neuronal populations using the perceptual phenomenon of phosphenes. METHOD: We exposed subjects to unidirectional visual motion adaptation and then measured early visual cortex (V1) excitability whilst subjects viewed motion in the adapted and non-adapted directions. RESULT: Following adaptation, the probability of perceiving a phosphene whilst viewing motion in the adapted direction was diminished, reflecting a reduction in V1 excitability. Conversely, V1 excitability was enhanced whilst viewing motion in the opposite direction to that used for adaptation. CONCLUSION: Our results support the view that, in humans, reciprocal inhibition between oppositely tuned direction-selective neurones in V1 facilitates motion perception. SIGNIFICANCE: This paradigm affords a unique opportunity to investigate changes in cortical excitability following peripheral vestibular disorders.
Collapse
Affiliation(s)
- Q Arshad
- Academic Department of Neuro-Otology, Imperial College London, Charing Cross Hospital Campus, Fulham Palace Road, London W6 8RF, United Kingdom
| | - Y Nigmatullina
- Academic Department of Neuro-Otology, Imperial College London, Charing Cross Hospital Campus, Fulham Palace Road, London W6 8RF, United Kingdom
| | - A M Bronstein
- Academic Department of Neuro-Otology, Imperial College London, Charing Cross Hospital Campus, Fulham Palace Road, London W6 8RF, United Kingdom.
| |
Collapse
|
44
|
Affiliation(s)
- Karin Nordström
- Department of Neuroscience, Uppsala University, SE-751 24 Uppsala, Sweden.
| |
Collapse
|
45
|
Egelhaaf M, Boeddeker N, Kern R, Kurtz R, Lindemann JP. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Front Neural Circuits 2012; 6:108. [PMID: 23269913 PMCID: PMC3526811 DOI: 10.3389/fncir.2012.00108] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2012] [Accepted: 12/03/2012] [Indexed: 11/30/2022] Open
Abstract
Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown by their characteristic behavioral actions to actively shape the dynamics of the image flow on their eyes ("optic flow"). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might be helpful in finding technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.
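The geometric point behind the saccadic flight-and-gaze strategy can be made explicit: for a viewing direction θ, translation at speed v produces angular flow (v/d)·sin(θ − heading), which carries nearness 1/d, whereas yaw rotation adds the same angular velocity everywhere regardless of distance. The sketch below illustrates this with toy geometry; the depth profile and parameter values are illustrative assumptions, not a model from the review.

```python
# Minimal sketch: distance information survives inversion of purely
# translational optic flow, but not of flow with rotation superimposed.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # viewing directions (rad)
distance = 1.0 + 0.8 * np.cos(3 * theta) ** 2            # toy depth profile (m)
v, heading, yaw_rate = 0.5, 0.0, 2.0                     # m/s, rad, rad/s

flow_translation = (v / distance) * np.sin(theta - heading)
flow_rotation = yaw_rate * np.ones_like(theta)           # same everywhere, depth-blind

# During pure translation (the intersaccadic phase), nearness is recoverable:
mask = np.abs(np.sin(theta - heading)) > 0.2             # avoid the flow poles
nearness = flow_translation[mask] / (v * np.sin(theta - heading)[mask])
print("max error, translation only:",
      np.max(np.abs(nearness - 1.0 / distance[mask])))

# With rotation superimposed, the same inversion is badly biased:
mixed = flow_translation + flow_rotation
biased = mixed[mask] / (v * np.sin(theta - heading)[mask])
print("max error, rotation superimposed:",
      np.max(np.abs(biased - 1.0 / distance[mask])))
```

This is the computational rationale for separating saccadic turns from translatory flight phases: only the intersaccadic, translation-dominated flow can be inverted into relative nearness.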
Collapse
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Centre of Excellence “Cognitive Interaction Technology”, Bielefeld University, Bielefeld, Germany
| | | | | | | | | |
Collapse
|
46
|
Distinct functional organizations for processing different motion signals in V1, V2, and V4 of macaque. J Neurosci 2012; 32:13363-79. [PMID: 23015427 DOI: 10.1523/jneurosci.1900-12.2012] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Motion perception is qualitatively invariant across different objects and forms: the same motion information can be conveyed by many different physical carriers. It requires the processing of motion signals consisting of the direction, speed, and axis or trajectory of motion defined by a moving object. Compared with the representation of orientation, the cortical processing of these different motion signals within the early ventral visual pathway of the primate remains poorly understood. Using drifting full-field noise stimuli and intrinsic optical imaging, along with cytochrome-oxidase staining, we found that the orientation domains in macaque V1, V2, and V4 that processed orientation signals also served to process motion signals associated with the axis and speed of motion. In contrast, direction domains within the thick stripes of V2 demonstrated preferences that were independent of motion speed. The population responses encoding the orientation and motion axis could be precisely reproduced by a spatiotemporal energy model. Thus, our observation of orientation domains with dual functions in V1, V2, and V4 directly supports the notion that the linear representation of the temporal series of retinotopic activations may serve as another motion processing strategy in the primate ventral visual pathway, contributing directly to fine form and motion analysis. Our findings further reveal that different types of motion information are differentially processed in parallel and segregated compartments within primate early visual cortices, before these motion features are fully combined in high-tier visual areas.
Collapse
|
47
|
Warzecha AK, Rosner R, Grewe J. Impact and sources of neuronal variability in the fly's motion vision pathway. ACTA ACUST UNITED AC 2012. [PMID: 23178476 DOI: 10.1016/j.jphysparis.2012.10.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Nervous systems encode information about dynamically changing sensory input by changes in neuronal activity. Neuronal activity changes, however, also arise from noise sources within and outside the nervous system or from changes of the animal's behavioral state. The resulting variability of neuronal responses in representing sensory stimuli limits the reliability with which animals can respond to stimuli and may thus even affect the chances for survival in certain situations. Relevant sources of noise arising at different stages along the motion vision pathway have been investigated from the sensory input to the initiation of behavioral reactions. Here, we concentrate on the reliability of processing visual motion information in flies. Flies rely on visual motion information to guide their locomotion. They are among the best established model systems for the processing of visual motion information allowing us to bridge the gap between behavioral performance and underlying neuronal computations. It has been possible to directly assess the consequences of noise at major stages of the fly's visual motion processing system on the reliability of neuronal signals. Responses of motion sensitive neurons and their variability have been related to optomotor movements as indicators for the overall performance of visual motion computation. We address whether and how noise already inherent in the stimulus, e.g. photon noise for the visual system, influences later processing stages and to what extent variability at the output level of the sensory system limits behavioral performance. Recent advances in circuit analysis and the progress in monitoring neuronal activity in behaving animals should now be applied to understand how the animal meets the requirements of fast and reliable manoeuvres in naturalistic situations.
Collapse
Affiliation(s)
| | - Ronny Rosner
- Tierphysiologie, Philipps-Universität Marburg, 35032 Marburg, Germany
| | - Jan Grewe
- Dept. Biology II, Ludwig-Maximilians Univ., 82152 Martinsried, Germany
| |
Collapse
|
48
|
Audio-visual localization with hierarchical topographic maps: Modeling the superior colliculus. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2012.05.015] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
49
|
Traschütz A, Zinke W, Wegener D. Speed change detection in foveal and peripheral vision. Vision Res 2012; 72:1-13. [DOI: 10.1016/j.visres.2012.08.019] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2012] [Revised: 08/06/2012] [Accepted: 08/31/2012] [Indexed: 10/27/2022]
|
50
|
O'Carroll DC, Barnett PD, Nordström K. Temporal and spatial adaptation of transient responses to local features. Front Neural Circuits 2012; 6:74. [PMID: 23087617 PMCID: PMC3474938 DOI: 10.3389/fncir.2012.00074] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2012] [Accepted: 10/01/2012] [Indexed: 11/15/2022] Open
Abstract
Interpreting visual motion within the natural environment is a challenging task, particularly considering that natural scenes vary enormously in brightness, contrast and spatial structure. The performance of current models for the detection of self-generated optic flow depends critically on these very parameters, but despite this, animals manage to successfully navigate within a broad range of scenes. Within global scenes local areas with more salient features are common. Recent work has highlighted the influence that local, salient features have on the encoding of optic flow, but it has been difficult to quantify how local transient responses affect responses to subsequent features and thus contribute to the global neural response. To investigate this in more detail we used experimenter-designed stimuli and recorded intracellularly from motion-sensitive neurons. We limited the stimulus to a small vertically elongated strip, to investigate local and global neural responses to pairs of local “doublet” features that were designed to interact with each other in the temporal and spatial domain. We show that the passage of a high-contrast doublet feature produces a complex transient response from local motion detectors consistent with predictions of a simple computational model. In the neuron, the passage of a high-contrast feature induces a local reduction in responses to subsequent low-contrast features. However, this neural contrast gain reduction appears to be recruited only when features stretch vertically (i.e., orthogonal to the direction of motion) across at least several aligned neighboring ommatidia. Horizontal displacement of the components of elongated features abolishes the local adaptation effect. It is thus likely that features in natural scenes with vertically aligned edges, such as tree trunks, recruit the greatest amount of response suppression. This property could emphasize the local responses to such features vs. those in nearby texture within the scene.
Collapse
Affiliation(s)
- David C O'Carroll
- Adelaide Centre for Neuroscience Research, School of Medical Sciences, The University of Adelaide, Adelaide, SA, Australia
| | | | | |
Collapse
|