1
Hasegawa H, Sakamoto K, Shomura K, Sano Y, Kasai K, Tanaka S, Okada-Shudo Y, Otomo A. Biomaterial-Based Biomimetic Visual Sensors: Inkjet Patterning of Bacteriorhodopsin. ACS Appl Mater Interfaces 2023; 15:45137-45145. PMID: 37702224. DOI: 10.1021/acsami.3c07540.
Abstract
Biomimetic visual sensors utilizing bacteriorhodopsin (bR) were fabricated using an inkjet method. The inkjet printer enabled jetting of the bR suspension, allowing deposition of bR films. The resulting inkjet-printed bR film exhibited time-differential photocurrent response characteristics similar to those of a dip-coated bR film, and the photocurrent intensity could be controlled simply by adjusting the number of printed bR film layers. Moreover, inkjet printing enabled unconstrained patterning, facilitating the design of various visual information processing functions, such as visual filters. In this study, we fabricated two such filters: a two-dimensional Difference of Gaussian (DOG) filter and a Gabor filter. The printed DOG filter demonstrated edge detection corresponding to contour recognition in visual receptive fields, while the printed Gabor filter proved effective in detecting objects of specific sizes as well as their motion and orientation. The combination of bR and inkjet printing holds significant potential for the widespread implementation of highly functional biomaterial-based visual sensors that provide real-time visual information while operating in an energy-efficient manner.
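Note on the two filter types named above (standard textbook forms with generic parameters, not values taken from the paper): the 2D DOG filter is the difference of a narrow "center" Gaussian and a broad "surround" Gaussian, giving the center-surround weighting used for edge and contour detection, while the Gabor filter is a Gaussian-windowed sinusoid, G(x, y) = \exp(-(x'^2 + \gamma^2 y'^2)/(2\sigma^2)) \cos(2\pi x'/\lambda + \phi) with x' = x\cos\theta + y\sin\theta and y' = -x\sin\theta + y\cos\theta, whose envelope width \sigma, wavelength \lambda, and orientation \theta make it selective for structure of a particular size and orientation, consistent with its use here for detecting objects of specific sizes, motion, and orientation.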
Affiliation(s)
- Hiroyuki Hasegawa
- Faculty of Education, Shimane University, Matsue, Shimane 690-8504, Japan
- Graduate School of Natural Science and Technology, Shimane University, Matsue, Shimane 690-8504, Japan
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Kairi Sakamoto
- Graduate School of Natural Science and Technology, Shimane University, Matsue, Shimane 690-8504, Japan
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Kazuya Shomura
- Faculty of Education, Shimane University, Matsue, Shimane 690-8504, Japan
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Yuka Sano
- Faculty of Education, Shimane University, Matsue, Shimane 690-8504, Japan
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Katsuyuki Kasai
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Shukichi Tanaka
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
- Yoshiko Okada-Shudo
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo 182-8585, Japan
- Akira Otomo
- Advanced ICT Research Institute, National Institute of Information and Communications Technology, Kobe 651-2492, Japan
2
Stacy AK, Schneider NA, Gilman NK, Van Hooser SD. Impact of Acute Visual Experience on Development of LGN Receptive Fields in the Ferret. J Neurosci 2023; 43:3495-3508. PMID: 37028934; PMCID: PMC10184738. DOI: 10.1523/jneurosci.1461-21.2023.
Abstract
Selectivity for direction of motion is a key feature of primary visual cortical neurons. Visual experience is required for direction selectivity in carnivore and primate visual cortex, but the circuit mechanisms of its formation remain incompletely understood. Here, we examined how developing lateral geniculate nucleus (LGN) neurons may contribute to cortical direction selectivity. Using in vivo electrophysiology, we examined LGN receptive field properties of visually naive female ferrets before and after exposure to 6 h of motion stimuli to assess the effect of acute visual experience on LGN cell development. We found that acute experience with motion stimuli did not significantly affect the weak orientation or direction selectivity of LGN neurons. In addition, neither the latency nor the sustainedness or transience of LGN neurons changed significantly with acute experience. These results suggest that the direction selectivity that emerges in cortex after acute experience is computed in cortex and cannot be explained by changes in LGN cells. SIGNIFICANCE STATEMENT: The development of typical neural circuitry requires experience-independent and experience-dependent factors. In the visual cortex of carnivores and primates, selectivity for motion arises as a result of experience, but we do not understand whether the major brain area between the retina and the visual cortex, the lateral geniculate nucleus of the thalamus, also participates. Here, we found that lateral geniculate neurons do not exhibit changes as a result of several hours of visual experience with moving stimuli at a time when visual cortical neurons undergo rapid change. We conclude that lateral geniculate neurons do not participate in this plasticity and that changes in cortex are likely responsible for the development of direction selectivity in carnivores and primates.
Affiliation(s)
- Andrea K Stacy
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
- Nathan A Schneider
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
- Noah K Gilman
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
- Stephen D Van Hooser
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454
- Sloan-Swartz Center for Theoretical Neurobiology, Brandeis University, Waltham, Massachusetts 02454
3
Chariker L, Shapley R, Hawken M, Young LS. A Computational Model of Direction Selectivity in Macaque V1 Cortex Based on Dynamic Differences between On and Off Pathways. J Neurosci 2022; 42:3365-3380. PMID: 35241489; PMCID: PMC9034785. DOI: 10.1523/jneurosci.2145-21.2022.
Abstract
This paper is about neural mechanisms of direction selectivity (DS) in macaque primary visual cortex, V1. We present data (on male macaque) showing strong DS in a majority of simple cells in V1 layer 4Cα, the cortical layer that receives direct afferent input from the magnocellular division of the lateral geniculate nucleus (LGN). Magnocellular LGN cells are not direction-selective. To understand the mechanisms of DS, we built a large-scale, recurrent model of spiking neurons called DSV1. Like its predecessors, DSV1 reproduces many visual response properties of V1 cells including orientation selectivity. Two important new features of DSV1 are (1) DS is initiated by small, consistent dynamic differences in the visual responses of OFF and ON Magnocellular LGN cells, and (2) DS in the responses of most model simple cells is increased over those of their feedforward inputs; this increase is achieved through dynamic interaction of feedforward and intracortical synaptic currents without the use of intracortical direction-specific connections. The DSV1 model emulates experimental data in the following ways: (1) most 4Cα Simple cells were highly direction-selective but 4Cα Complex cells were not; (2) the preferred directions of the model's direction-selective Simple cells were invariant with spatial and temporal frequency (TF); (3) the distribution of the preferred/opposite ratio across the model's population of cells was very close to that found in experiments. The strong quantitative agreement between DS in data and in model simulations suggests that the neural mechanisms of DS in DSV1 may be similar to those in the real visual cortex. SIGNIFICANCE STATEMENT: Motion perception is a vital part of our visual experience of the world. In monkeys, whose vision resembles that of humans, the neural computation of the direction of a moving target starts in the primary visual cortex, V1, in layer 4Cα that receives input from the eye through the lateral geniculate nucleus (LGN). How direction selectivity (DS) is generated in layer 4Cα is an outstanding unsolved problem in theoretical neuroscience. In this paper, we offer a solution based on plausible biological mechanisms. We present a new large-scale circuit model in which DS originates from slightly different LGN ON/OFF response time-courses and is enhanced in cortex without the need for direction-specific intracortical connections. The model's DS is in quantitative agreement with experiments.
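A minimal schematic of how an ON/OFF timing difference can seed direction selectivity (an illustration consistent with the abstract, not the DSV1 equations themselves): if a Simple cell sums an OFF subregion centered at x = 0 and an ON subregion centered at x = \Delta x, and the OFF pathway leads the ON pathway by a small time \delta t, then the two subregion responses align in time only for stimuli sweeping from the OFF toward the ON subregion at speeds near v \approx \Delta x / \delta t, so the summed feedforward drive is stronger in that direction than in the opposite one. Here \Delta x and \delta t are illustrative parameters; in the model this feedforward bias is further amplified by intracortical currents without direction-specific wiring.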
Affiliation(s)
- Logan Chariker
- School of Natural Sciences, Institute for Advanced Study, Princeton, New Jersey 08540
- Robert Shapley
- Center for Neural Science, New York University, New York, New York 10003
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10012
- Michael Hawken
- Center for Neural Science, New York University, New York, New York 10003
- Lai-Sang Young
- School of Natural Sciences, Institute for Advanced Study, Princeton, New Jersey 08540
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10012
- School of Mathematics, Institute for Advanced Study, Princeton, New Jersey 08540
4
Liu X, Robinson PA. Analytic Model for Feature Maps in the Primary Visual Cortex. Front Comput Neurosci 2022; 16:659316. PMID: 35185503; PMCID: PMC8854373. DOI: 10.3389/fncom.2022.659316.
Abstract
A compact analytic model is proposed to describe the combined orientation preference (OP) and ocular dominance (OD) features of simple cells and their mutual constraints on the spatial layout of the combined OP-OD map in the primary visual cortex (V1). This model consists of three parts: (i) an anisotropic Laplacian (AL) operator that represents the local neural sensitivity to the orientation of visual inputs; (ii) a receptive field (RF) operator that models the anisotropic spatial projection from nearby neurons to a given V1 cell over scales of a few tenths of a millimeter and combines with the AL operator to give an overall OP operator; and (iii) a map that describes how the parameters of these operators vary approximately periodically across V1. The parameters of the proposed model maximize the neural response at a given OP, with the OP tuning curve fitted to experimental results. It is found that the anisotropy of the AL operator does not significantly affect OP selectivity, which is dominated by the RF anisotropy, consistent with Hubel and Wiesel's original conclusion that the orientation tuning width of a V1 simple cell is inversely related to the elongation of its RF. A simplified and idealized OP-OD map is then constructed to describe the approximately periodic local OP-OD structure of V1 in a compact form. It is shown explicitly that the OP map can be approximated by retaining its dominant spatial Fourier coefficients, which suffice to reconstruct its basic spatial structure. Moreover, this representation is a suitable form for analyzing observed OP maps compactly and for use in neural field theory (NFT) to analyze activity modulated by the OP-OD structure of V1. Application to independently simulated V1 OP structure shows that observed irregularities in the map correspond to a spread of dominant coefficients on a circle in Fourier space. In addition, there is a strong bias toward two perpendicular directions when only a small patch of the local map is included; the bias decreases as the amount of V1 included in the Fourier transform is increased.
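A common compact way to write such a Fourier-reduced OP map (a standard convention used here only to illustrate the idea, not the authors' exact parameterization) is to encode orientation preference as the phase of a complex field, z(\mathbf{r}) = \sum_j A_j e^{i(\mathbf{k}_j \cdot \mathbf{r} + \phi_j)} with \theta_{OP}(\mathbf{r}) = \tfrac{1}{2} \arg z(\mathbf{r}), where only the dominant wave vectors \mathbf{k}_j, lying near a circle of fixed radius in Fourier space, are retained; irregularities in the map then correspond to how the retained \mathbf{k}_j spread around that circle.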
Affiliation(s)
- Xiaochen Liu
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- Center for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
- Correspondence: Xiaochen Liu
- Peter A. Robinson
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- Center for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
5
Stacy AK, Van Hooser SD. Development of Functional Properties in the Early Visual System: New Appreciations of the Roles of Lateral Geniculate Nucleus. Curr Top Behav Neurosci 2022; 53:3-35. PMID: 35112333. DOI: 10.1007/7854_2021_297.
Abstract
In the years following Hubel and Wiesel's first reports on ocular dominance plasticity and amblyopia, much attention has been focused on understanding the role of cortical circuits in developmental and experience-dependent plasticity. Initial studies found few differences between retinal ganglion cells and neurons in the lateral geniculate nucleus and uncovered little evidence for an impact of altered visual experience on the functional properties of lateral geniculate nucleus neurons. In the last two decades, however, studies have revealed that the connectivity between the retina and lateral geniculate nucleus is much richer than was previously appreciated, even revealing visual plasticity - including ocular dominance plasticity - in lateral geniculate nucleus neurons. Here we review the development of the early visual system and the impact of experience with a distinct focus on recent discoveries about lateral geniculate nucleus, its connectivity, and evidence for its plasticity and rigidity during development.
Affiliation(s)
- Andrea K Stacy
- Department of Biology, Brandeis University, Waltham, MA, USA
6
Abstract
This paper offers a theory for the origin of direction selectivity (DS) in the macaque primary visual cortex, V1. DS is essential for the perception of motion and control of pursuit eye movements. In the macaque visual pathway, neurons with DS first appear in V1, in the Simple cell population of the Magnocellular input layer 4Cα. The lateral geniculate nucleus (LGN) cells that project to these cortical neurons, however, are not direction selective. We hypothesize that DS is initiated in feed-forward LGN input, in the summed responses of LGN cells afferent to a cortical cell, and it is achieved through the interplay of 1) different visual response dynamics of ON and OFF LGN cells and 2) the wiring of ON and OFF LGN neurons to cortex. We identify specific temporal differences in the ON/OFF pathways that, together with item 2, produce distinct response time courses in separated subregions; analysis and simulations confirm the efficacy of the mechanisms proposed. To constrain the theory, we present data on Simple cells in layer 4Cα in response to drifting gratings. About half of the cells were found to have high DS, and the DS was broadband in spatial and temporal frequency (SF and TF). The proposed theory includes a complete analysis of how stimulus features such as SF and TF interact with ON/OFF dynamics and LGN-to-cortex wiring to determine the preferred direction and magnitude of DS.
7
Gekas N, Mamassian P. Adaptation to one perceived motion direction can generate multiple velocity aftereffects. J Vis 2021; 21(5):17. PMID: 34007990; PMCID: PMC8142737. DOI: 10.1167/jov.21.5.17.
Abstract
Sensory adaptation is a useful tool to identify the links between perceptual effects and neural mechanisms. Even though motion adaptation is one of the earliest and most documented aftereffects, few studies have investigated the perception of direction and speed of the aftereffect at the same time, that is, the perceived velocity. Using a novel experimental paradigm, we simultaneously recorded the perceived direction and speed of leftward or rightward moving random dots before and after adaptation. For the adapting stimulus, we chose a horizontally oriented broadband grating moving upward behind a circular aperture. Because of the aperture problem, the interpretation of this stimulus is ambiguous, being consistent with multiple velocities, and yet it is systematically perceived as moving in a single direction at a single speed. Here we ask whether the visual system adapts to the multiple velocities of the adaptor or to just the single perceived velocity. Our results show a strong repulsion aftereffect, away from the adapting velocity (downward and slower), that increases gradually for faster test stimuli as long as these stimuli include some velocities that match some of the ambiguous ones of the adaptor. In summary, the visual system seems to adapt to the multiple velocities of an ambiguous stimulus even though a single velocity is perceived. Our findings can be well described by a computational model that assumes a joint encoding of direction and speed and that includes an extended adaptation component that can represent all the possible velocities of the ambiguous stimulus.
Affiliation(s)
- Nikos Gekas
- School of Psychology, University of Nottingham, Nottingham, UK; Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
8
Learned optical flow for intra-operative tracking of the retinal fundus. Int J Comput Assist Radiol Surg 2020; 15:827-836. PMID: 32323210; PMCID: PMC7261285. DOI: 10.1007/s11548-020-02160-9.
Abstract
Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.
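A minimal sketch of how a dense flow estimate supports point tracking (illustrative notation, not the paper's exact pipeline): if F_t(\mathbf{p}) denotes the predicted flow field at frame t, sampled (for example bilinearly) at image position \mathbf{p}, then a tracked point of interest is propagated as \mathbf{p}_{t+1} = \mathbf{p}_t + F_t(\mathbf{p}_t); the jointly predicted segmentation can flag frames in which \mathbf{p}_t lies under a surgical tool, where the flow value is inpainted rather than directly observed.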
9
Moscatelli A, La Scaleia B, Zago M, Lacquaniti F. Motion direction, luminance contrast, and speed perception: An unexpected meeting. J Vis 2019; 19(6):16. PMID: 31206138. DOI: 10.1167/19.6.16.
Abstract
Motion direction and luminance contrast are two central features in the representation of visual motion in humans. In five psychophysical experiments, we showed that these two features affect the perceived speed of a visual stimulus. Our data showed a surprising interaction between contrast and direction. Participants perceived downward moving stimuli as faster than upward or rightward stimuli, but only at high contrast. Likewise, luminance contrast produced an underestimation of motion speed, but mostly when the stimuli moved downward. We explained these novel phenomena by means of a theoretical model, accounting for prior knowledge of motion dynamics.
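One common formalization of such prior knowledge (given as background; the authors' model may differ in detail) is a Bayesian estimator of velocity, \hat{v} = \arg\max_v p(m \mid v)\, p(v), in which the likelihood p(m \mid v) broadens as luminance contrast falls and the prior p(v) encodes expected motion dynamics; a prior that is not symmetric across directions (for example, one shaped by experience with gravity-accelerated downward motion) can then produce the kind of direction-by-contrast interactions in perceived speed reported here.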
Affiliation(s)
- Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Barbara La Scaleia
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Myrka Zago
- Department of Civil Engineering and Computer Science Engineering, Centre of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Francesco Lacquaniti
- Department of Systems Medicine and Centre of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
10
Compound Stimuli Reveal the Structure of Visual Motion Selectivity in Macaque MT Neurons. eNeuro 2019; 6:ENEURO.0258-19.2019. PMID: 31604815; PMCID: PMC6868477. DOI: 10.1523/eneuro.0258-19.2019.
Abstract
Motion selectivity in primary visual cortex (V1) is approximately separable in orientation, spatial frequency, and temporal frequency (“frequency-separable”). Models for area MT neurons posit that their selectivity arises by combining direction-selective V1 afferents whose tuning is organized around a tilted plane in the frequency domain, specifying a particular direction and speed (“velocity-separable”). This construction explains “pattern direction-selective” MT neurons, which are velocity-selective but relatively invariant to spatial structure, including spatial frequency, texture and shape. We designed a set of experiments to distinguish frequency-separable and velocity-separable models and executed them with single-unit recordings in macaque V1 and MT. Surprisingly, when tested with single drifting gratings, most MT neurons’ responses are fit equally well by models with either form of separability. However, responses to plaids (sums of two moving gratings) tend to be better described as velocity-separable, especially for pattern neurons. We conclude that direction selectivity in MT is primarily computed by summing V1 afferents, but pattern-invariant velocity tuning for complex stimuli may arise from local, recurrent interactions.
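For concreteness, the two forms of separability can be stated in frequency-domain terms (standard definitions, not specific to this paper): a pattern translating rigidly at velocity (v_x, v_y) has spectral power only on the plane \omega_t + v_x \omega_x + v_y \omega_y = 0, so a velocity-separable neuron is tuned around such a tilted plane irrespective of spatial pattern, whereas a frequency-separable neuron factors into independent tuning for orientation, spatial frequency, and temporal frequency. A single drifting grating samples only one point in this space, which is consistent with the finding that plaids, which sample two points at once, are needed to distinguish the two models.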
11
Blair CD. The Wandering Circles: A Flicker Rate and Contour-Dependent Motion Illusion. Iperception 2019; 10:2041669519875156. PMID: 31656578; PMCID: PMC6790949. DOI: 10.1177/2041669519875156.
Abstract
Understanding of the visual system can be informed by examining errors in perception. We present a novel illusion, the Wandering Circles, in which stationary circles undergoing contrast-polarity reversals (i.e., flicker), when viewed peripherally, appear to move about in a random fashion. In two psychophysical experiments, participants rated the strength of perceived illusory motion under varying stimulus conditions. The illusory motion percept was strongest when the circle's edge was defined by a light/dark alternation and when the edge faded smoothly to the background gray (i.e., a circular arrangement of the Craik-O'Brien-Cornsweet illusion). In addition, the percept of illusory motion is flicker rate dependent, appearing strongest when the circles reversed polarity 9.44 times per second and weakest at 1.98 times per second. The Wandering Circles differ from many other classic motion illusions in that the light/dark alternation is perfectly balanced in time and position around the edges of the circle, and thus there is no net directional local or global motion energy in the stimulus. The perceived motion may instead rely on factors internal to the viewer such as top-down influences, asymmetries in luminance and motion perception across the retina, adaptation combined with positional uncertainty due to peripheral viewing, eye movements, or low contrast edges.
12
Hughes AE, Greenwood JA, Finlayson NJ, Schwarzkopf DS. Population receptive field estimates for motion-defined stimuli. Neuroimage 2019; 199:245-260. PMID: 31158480; PMCID: PMC6693563. DOI: 10.1016/j.neuroimage.2019.05.068.
Abstract
The processing of motion changes throughout the visual hierarchy, from spatially restricted ‘local motion’ in early visual cortex to more complex large-field ‘global motion’ at later stages. Here we used functional magnetic resonance imaging (fMRI) to examine spatially selective responses in these areas related to the processing of random-dot stimuli defined by differences in motion. We used population receptive field (pRF) analyses to map retinotopic cortex using bar stimuli comprising coherently moving dots. In the first experiment, we used three separate background conditions: no background dots (dot-defined bar-only), dots moving coherently in the opposite direction to the bar (kinetic boundary) and dots moving incoherently in random directions (global motion). Clear retinotopic maps were obtained for the bar-only and kinetic-boundary conditions across visual areas V1–V3 and in higher dorsal areas. For the global-motion condition, retinotopic maps were much weaker in early areas and became clear only in higher areas, consistent with the emergence of global-motion processing throughout the visual hierarchy. However, in a second experiment we demonstrate that this pattern is not specific to motion-defined stimuli, with very similar results for a transparent-motion stimulus and a bar defined by a static low-level property (dot size) that should have driven responses particularly in V1. We further exclude explanations based on stimulus visibility by demonstrating that the observed differences in pRF properties do not follow the ability of observers to localise or attend to these bar elements. Rather, our findings indicate that dorsal extrastriate retinotopic maps may primarily be determined by the visibility of the neural responses to the bar relative to the background response (i.e. neural signal-to-noise ratios) and suggests that claims about stimulus selectivity from pRF experiments must be interpreted with caution.
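For background, pRF mapping of this kind typically follows the Dumoulin-Wandell formulation (summarized here in generic form; implementation details in this study may differ): each voxel's pRF is modeled as an isotropic 2D Gaussian g(x, y) = \exp(-[(x - x_0)^2 + (y - y_0)^2]/(2\sigma^2)), the predicted time course is the overlap of the stimulus aperture with g convolved with a haemodynamic response function, and (x_0, y_0, \sigma) are fitted per voxel; the retinotopic maps and pRF properties discussed above are derived from these fitted parameters.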
Affiliation(s)
- Anna E Hughes
- Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK.
- John A Greenwood
- Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Nonie J Finlayson
- Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- D Samuel Schwarzkopf
- Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
13
Going with the Flow: The Neural Mechanisms Underlying Illusions of Complex-Flow Motion. J Neurosci 2019; 39:2664-2685. PMID: 30777886. DOI: 10.1523/jneurosci.2112-18.2019.
Abstract
Studying the mismatch between perception and reality helps us better understand the constructive nature of the visual brain. The Pinna-Brelstaff motion illusion is a compelling example illustrating how a complex moving pattern can generate an illusory motion perception. When an observer moves toward (expansion) or away from (contraction) the Pinna-Brelstaff figure, the figure appears to rotate. The neural mechanisms underlying the illusory complex-flow motion of rotation, expansion, and contraction remain unknown. We studied this question at both perceptual and neuronal levels in behaving male macaques by using carefully parametrized Pinna-Brelstaff figures that induce the above motion illusions. We first demonstrate that macaques perceive illusory motion in a manner similar to that of human observers. Neurophysiological recordings were subsequently performed in the middle temporal area (MT) and the dorsal portion of the medial superior temporal area (MSTd). We find that subgroups of MSTd neurons encoding a particular global pattern of real complex-flow motion (rotation, expansion, contraction) also represent illusory motion patterns of the same class. They require an extra 15 ms to reliably discriminate the illusion. In contrast, MT neurons encode both real and illusory local motions with similar temporal delays. These findings reveal that illusory complex-flow motion is first represented in MSTd by the same neurons that normally encode real complex-flow motion. However, the extraction of global illusory motion in MSTd from other classes of real complex-flow motion requires extra processing time. Our study illustrates a cascaded integration mechanism from MT to MSTd underlying the transformation from external physical to internal nonveridical flow-motion perception. SIGNIFICANCE STATEMENT: The neural basis of the transformation from objective reality to illusory percepts of rotation, expansion, and contraction remains unknown. We demonstrate psychophysically that macaques perceive these illusory complex-flow motions in a manner similar to that of human observers. At the neural level, we show that medial superior temporal (MSTd) neurons represent illusory flow motions as if they were real by globally integrating middle temporal area (MT) local motion signals. Furthermore, while MT neurons reliably encode real and illusory local motions with similar temporal delays, MSTd neurons take a significantly longer time to process the signals associated with illusory percepts. Our work extends previous complex-flow motion studies by providing the first detailed analysis of the neuron-specific mechanisms underlying complex forms of illusory motion integration from MT to MSTd.
14
Erlikhman G, Gutentag S, Blair CD, Caplovitz GP. Interactions of flicker and motion. Vision Res 2019; 155:24-34. PMID: 30611695; PMCID: PMC6347541. DOI: 10.1016/j.visres.2018.12.005.
Abstract
We present a series of novel observations about interactions between flicker and motion that lead to three distinct perceptual effects. We use the term flicker to describe alternating changes in a stimulus' luminance or color (i.e., a circle that flickers from black to white and vice versa). When objects flicker, three distinct phenomena can be observed: (1) Flicker Induced Motion (FLIM), in which a single, stationary object appears to move when it flickers at certain rates; (2) Flicker Induced Motion Suppression (FLIMS), in which a moving object appears to be stationary when it flickers at certain rates; and (3) Flicker-Induced Induced-Motion (FLIIM), in which moving objects that are flickering induce another flickering stationary object to appear to move. Across four psychophysical experiments, we characterize key stimulus parameters underlying these flicker-motion interactions. Interactions were strongest in the periphery and at flicker frequencies above 10 Hz. Induced motion occurred not just for luminance flicker, but for isoluminant color changes as well. We also found that the more physically moving objects there were, the more motion induction to stationary objects occurred. We present demonstrations that the effects reported here cannot be fully accounted for by eye movements: we show that multiple stationary objects induced to move via flicker can appear to move independently and in random directions, whereas eye movements would have caused all of the objects to appear to move coherently. These effects highlight the fundamental role of spatiotemporal dynamics in the representation of motion and the intimate relationship between flicker and motion.
Affiliation(s)
- Gennady Erlikhman
- Department of Psychology, University of Nevada, Reno, United States; Department of Psychology, University of California, Los Angeles, United States.
- Sion Gutentag
- Department of Psychology, University of Nevada, Reno, United States
15
A preference for minimal deformation constrains the perceived depth of a stereokinetic stimulus. Vision Res 2018; 153:53-59. PMID: 30248368. DOI: 10.1016/j.visres.2018.09.003.
Abstract
The current study examined whether the 'slow and smooth' hypothesis (Hildreth, 1984; Yuille & Grzywacz, 1989; Weiss, Simoncelli, & Adelson, 2002) could be extended to explain a three-dimensional (3D) stereokinetic percept by specifying the smoothness term as a preference for minimal deformation. Stereokinetic stimuli are two-dimensional (2D) configurations that lead to 3D percepts when rotated in the image plane. In particular, a rotating ellipse with an eccentric dot gives rise to the percept of a cone with a defined height. In the current study, the spatial relationship between the ellipse and dot varied across trials in terms of the dot's relative location and the aspect ratio of the ellipse. During each trial, participants (n = 8) adjusted the length of a 2D bar centered along the minor axis of the ellipse to indicate their perceived height of the cone. Upon rotation, the 2D bar was perceived to be perpendicular to the circular base of the cone. Our results were qualitatively and quantitatively consistent with the traditional hypothesis of minimum object change (Jansson & Johansson, 1973), which is also similar to the maximal rigidity assumption (Ullman, 1979). As the dot shifted from the major axis towards the minor axis of the ellipse, observers consistently reported an increasingly taller cone. The results illustrate the tendency of observers to perceive the apex of the cone at a height that minimized its 3D distance to the surface normal at the center of the circular base of the cone, reducing the relative motion between the dot and the base of the cone. The current study provides empirical evidence suggesting that, when presented with an ambiguous stereokinetic stimulus, the visual system prefers the interpretation that corresponds to a 3D percept that is slowest and maximally rigid.
16
Hassan O, Georgeson MA, Hammett ST. Brightening and Dimming Aftereffects at Low and High Luminance. Vision (Basel) 2018; 2(2):24. PMID: 31735888; PMCID: PMC6835348. DOI: 10.3390/vision2020024.
Abstract
Adaptation to a spatially uniform field that increases or decreases in luminance over time yields a “ramp aftereffect”, whereby a steady, uniform luminance appears to dim or brighten, and an appropriate non-uniform test field appears to move. We measured the duration of this aftereffect of adaptation to ascending and descending luminance for a wide range of temporal frequencies and luminance amplitudes. Three types of luminance ramp profiles were used: linear, logarithmic, and exponential. The duration of the motion aftereffect increased as amplitude increased, regardless of the frequency, slope, or ramp profile of the adapting pattern. At low luminance, this result held for ascending luminance adaptation, but the duration of the aftereffect was significantly reduced for descending luminance adaptation. This reduction in the duration of the aftereffect at low luminance is consistent with differential recruitment of temporally tuned cells of the ON and OFF pathways, but the relative independence of the effect from temporal frequency is not.
Affiliation(s)
- Omar Hassan
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark A. Georgeson
- School of Life & Health Sciences, Aston University, Birmingham B4 7ET, UK
- Stephen T. Hammett
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Correspondence: Tel.: +44-1784-443-3702
17
Neural mechanisms underlying sensitivity to reverse-phi motion in the fly. PLoS One 2017; 12:e0189019. PMID: 29261684; PMCID: PMC5737883. DOI: 10.1371/journal.pone.0189019.
Abstract
Optical illusions provide powerful tools for mapping the algorithms and circuits that underlie visual processing, revealing structure through atypical function. Of particular note in the study of motion detection has been the reverse-phi illusion. When contrast reversals accompany discrete movement, detected direction tends to invert. This occurs across a wide range of organisms, spanning humans and invertebrates. Here, we map an algorithmic account of the phenomenon onto neural circuitry in the fruit fly Drosophila melanogaster. Through targeted silencing experiments in tethered walking flies as well as electrophysiology and calcium imaging, we demonstrate that ON- or OFF-selective local motion detector cells T4 and T5 are sensitive to certain interactions between ON and OFF. A biologically plausible detector model accounts for subtle features of this particular form of illusory motion reversal, like the re-inversion of turning responses occurring at extreme stimulus velocities. In light of comparable circuit architecture in the mammalian retina, we suggest that similar mechanisms may apply even to human psychophysics.
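As background on why contrast reversal can invert the detected direction (a textbook correlator account; the paper's detector model is more detailed): a Hassenstein-Reichardt motion detector compares delayed and undelayed signals from two neighboring inputs, R(t) = s_1(t - \tau)\, s_2(t) - s_1(t)\, s_2(t - \tau). Because the output depends on the product of the two input contrasts, reversing the stimulus contrast between successive displacements flips the sign of that product, and hence the sign of R and the reported direction, which is the reverse-phi effect.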
18
Abstract
When observers move the head backwards and forwards while fixating on the center of concentric circles composed of oblique lines, they see illusory rotation of those circles. If several dots are superimposed near the inner concentric circles, observers see the illusory rotation not only for the circles but also for the superimposed dots. This illusory rotation of the dots is based on motion capture. In this study, in order to understand the basis of the motion capture, we examined how motion signals with different directions on the display (rotation, expansion/contraction, and horizontal translation), as well as the illusory motion signal from the oblique components, affect motion capture. If the stimulus presented rotation with expansion/contraction, or rotation with horizontal translation, for the entire stimulus, then observers tended to perceive motion capture for the superimposed dots. However, if the stimulus presented only rotation of the circles, then observers tended to perceive induced motion for the superimposed dots. These results suggest that the existence of a common-fate factor for the entire stimulus determines how the motion signal of each element in the stimulus is allocated and integrated to generate motion capture.
Affiliation(s)
- Makoto Ichikawa
- Department of Psychology, Chiba University, Chiba, Japan; Research Institute for Time Study, Yamaguchi University, Yamaguchi, Japan
- Yuko Masakura
- Faculty of Creation and Representation, Aichi Shukutoku University, Aichi, Japan
19
A Neural Model of MST and MT Explains Perceived Object Motion during Self-Motion. J Neurosci 2016; 36:8093-8102. PMID: 27488630. DOI: 10.1523/jneurosci.4593-15.2016.
Abstract
When a moving object cuts in front of a moving observer at a 90° angle, the observer correctly perceives that the object is traveling along a perpendicular path just as if viewing the moving object from a stationary vantage point. Although the observer's own (self-)motion affects the object's pattern of motion on the retina, the visual system is able to factor out the influence of self-motion and recover the world-relative motion of the object (Matsumiya and Ando, 2009). This is achieved by using information in global optic flow (Rushton and Warren, 2005; Warren and Rushton, 2009; Fajen and Matthis, 2013) and other sensory arrays (Dupin and Wexler, 2013; Fajen et al., 2013; Dokka et al., 2015) to estimate and deduct the component of the object's local retinal motion that is due to self-motion. However, this account (known as "flow parsing") is qualitative and does not shed light on mechanisms in the visual system that recover object motion during self-motion. We present a simple computational account that makes explicit possible mechanisms in visual cortex by which self-motion signals in the medial superior temporal area interact with object motion signals in the middle temporal area to transform object motion into a world-relative reference frame. The model (1) relies on two mechanisms (MST-MT feedback and disinhibition of opponent motion signals in MT) to explain existing data, (2) clarifies how pathways for self-motion and object-motion perception interact, and (3) unifies the existing flow parsing hypothesis with established neurophysiological mechanisms. SIGNIFICANCE STATEMENT: To intercept targets, we must perceive the motion of objects that move independently from us as we move through the environment. Although our self-motion substantially alters the motion of objects on the retina, compelling evidence indicates that the visual system at least partially compensates for self-motion such that object motion relative to the stationary environment can be more accurately perceived. We have developed a model that sheds light on plausible mechanisms within the visual system that transform retinal motion into a world-relative reference frame. Our model reveals how local motion signals (generated through interactions within the middle temporal area) and global motion signals (feedback from the dorsal medial superior temporal area) contribute and offers a new hypothesis about the connection between pathways for heading and object motion perception.
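The flow-parsing computation referred to above can be stated compactly (a schematic identity, not the model's full dynamics): the retinal motion of an object is approximately \mathbf{v}_{retina}(\mathbf{x}) = \mathbf{v}_{object}(\mathbf{x}) + \mathbf{v}_{self}(\mathbf{x}), so recovering world-relative object motion amounts to estimating the self-motion component \mathbf{v}_{self}(\mathbf{x}) from global optic flow (the MSTd signal in the model) and subtracting it from the local retinal motion (the MT signal) at the object's location.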
20
Jancke D. Catching the voltage gradient-asymmetric boost of cortical spread generates motion signals across visual cortex: a brief review with special thanks to Amiram Grinvald. Neurophotonics 2017; 4:031206. PMID: 28217713; PMCID: PMC5301132. DOI: 10.1117/1.nph.4.3.031206.
Abstract
Wide-field voltage imaging is unique in its capability to capture snapshots of activity, across the full gradient of average changes in membrane potentials from subthreshold to suprathreshold levels, of hundreds of thousands of superficial cortical neurons that are simultaneously active. Here, I highlight two examples where voltage-sensitive dye imaging (VSDI) was exploited to track gradual space-time changes of activity within milliseconds across several millimeters of cortex at submillimeter resolution: the line-motion condition, measured in Amiram Grinvald's laboratory more than 10 years ago, and, coming full circle running VSDI in my laboratory, another motion-inducing condition, in which two neighboring stimuli counterchange luminance simultaneously. In both examples, cortical spread is asymmetrically boosted, creating suprathreshold activity drawn out over primary visual cortex. These rapidly propagating waves may integrate brain signals that encode motion independent of direction-selective circuits.
Affiliation(s)
- Dirk Jancke
- Ruhr University Bochum, Optical Imaging Group, Institut für Neuroinformatik, Bochum, Germany
21
Abstract
A ring of equally spaced dots moving around a circular path appears to move faster as the number of dots increases (Ho & Anstis, 2013, Best Illusion of the Year contest). We measured this "spinner" effect with radial sinusoidal gratings, using a 2AFC procedure in which participants selected the faster of two briefly presented gratings of different spatial frequencies (SFs) rotating at various angular speeds. Compared with the reference stimulus of 4 c/rev (0.64 c/rad), participants consistently overestimated the angular speed of test stimuli with higher radial SFs but underestimated it for test stimuli with lower radial SFs. The spinner effect increased in magnitude but saturated rapidly as the test radial SF increased. Similar effects were observed with translating linear sinusoidal gratings of different SFs. Our results support the idea that human speed perception is biased by temporal frequency, which physically goes up as SF increases when speed is held constant. Hence, the more dots or lines, the greater the perceived speed when they move coherently within a defined area.
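The temporal-frequency account can be made concrete with a small worked example (numbers chosen for illustration only): for a radial grating rotating at angular speed \omega in rev/s, the local temporal frequency is TF = SF_{radial} \times \omega with SF_{radial} in c/rev, so at \omega = 1 rev/s a 4 c/rev grating modulates locally at 4 Hz while a 16 c/rev grating modulates at 16 Hz; an observer whose speed estimate is biased toward temporal frequency would therefore judge the 16 c/rev pattern as rotating faster even though the angular speed is identical.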
22
Hughes AE, Jones C, Joshi K, Tolhurst DJ. Diverted by dazzle: perceived movement direction is biased by target pattern orientation. Proc Biol Sci 2017; 284:20170015. PMID: 28275144; PMCID: PMC5360933. DOI: 10.1098/rspb.2017.0015.
Abstract
'Motion dazzle' is the hypothesis that predators may misjudge the speed or direction of moving prey which have high-contrast patterning, such as stripes. However, there is currently little experimental evidence that such patterns cause visual illusions. Here, observers binocularly tracked a Gabor target, moving with a linear trajectory randomly chosen within 18° of the horizontal. This target then became occluded, and observers were asked to judge where they thought it would later cross a vertical line to the side. We found that internal motion of the stripes within the Gabor biased judgements as expected: Gabors with upwards internal stripe motion relative to the overall direction of motion were perceived to be crossing above Gabors with downwards internal stripe movement. However, surprisingly, we found a much stronger effect of the rigid pattern orientation. Patches with oblique stripes pointing upwards relative to the direction of motion were perceived to cross above patches with downward-pointing stripes. This effect occurred only at high speeds, suggesting that it may reflect an orientation-dependent effect in which spatial signals are used in direction judgements. These findings have implications for our understanding of motion dazzle mechanisms and how human motion and form processing interact.
Affiliation(s)
- Anna E Hughes
- Department of Psychology and Language Sciences, University College London, 26 Bedford Way, London WC1H 0AP, UK
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- Christian Jones
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- Kaustuv Joshi
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
- David J Tolhurst
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK
23
The Café Wall Illusion: Local and Global Perception from Multiple Scales to Multiscale. Appl Comput Intell Soft Comput 2017; 2017:8179579. DOI: 10.1155/2017/8179579.
Abstract
Geometrical illusions are a subclass of optical illusions in which the geometrical characteristics of patterns in particular orientations and angles are distorted and misperceived as a result of low-to-high-level retinal/cortical processing. Modelling the detection of tilt in these illusions, and its strength, is a challenging task and leads to the development of techniques that explain important features of human perception. We present here a predictive and quantitative approach for modelling foveal and peripheral vision for the induced tilt in the Café Wall illusion, in which parallel mortar lines between shifted rows of black and white tiles appear to converge and diverge. Difference of Gaussians is used to define a bioderived filtering model for the responses of retinal simple cells to the stimulus, while an analytical processing pipeline is developed to quantify the angle of tilt in the model and develop confidence intervals around them. Several sampling sizes and aspect ratios are explored to model variant foveal views, and a variety of pattern configurations are tested to model variant Gestalt views. The analysis of our model across this range of test configurations presents a precisely quantified comparison contrasting local tilt detection in the foveal sample sets with pattern-wide Gestalt tilt.
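The filtering stage named above uses the standard Difference of Gaussians kernel (generic form; the paper's specific scales and ratios are not reproduced here): DoG_{\sigma,s}(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/(2\sigma^2)} - \frac{1}{2\pi (s\sigma)^2} e^{-(x^2 + y^2)/(2(s\sigma)^2)} with surround-to-center ratio s > 1; convolving the Café Wall pattern with such kernels over a range of center scales \sigma yields the multiple-scale edge representation from which the tilt angles are then measured.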
24
Hogendoorn H, Verstraten FAJ, MacDougall H, Alais D. Vestibular signals of self-motion modulate global motion perception. Vision Res 2016; 130:22-30. PMID: 27871885. DOI: 10.1016/j.visres.2016.11.002.
Abstract
Certain visual stimuli can have two possible interpretations. These perceptual interpretations may alternate stochastically, a phenomenon known as bistability. Some classes of bistable stimuli, including binocular rivalry, are sensitive to bias from input through other modalities, such as sound and touch. Here, we address the question whether bistable visual motion stimuli, known as plaids, are affected by vestibular input that is caused by self-motion. In Experiment 1, we show that a vestibular self-motion signal biases the interpretation of the bistable plaid, increasing or decreasing the likelihood of the plaid being perceived as globally coherent or transparently sliding depending on the relationship between self-motion and global visual motion directions. In Experiment 2, we find that when the vestibular direction is orthogonal to the visual direction, the vestibular self-motion signal also biases the direction of one-dimensional motion. This interaction suggests that the effect in Experiment 1 is due to the self-motion vector adding to the visual motion vectors. Together, this demonstrates that the perception of visual motion direction can be systematically affected by concurrent but uninformative and task-irrelevant vestibular input caused by self-motion.
Affiliation(s)
- Hinze Hogendoorn
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, The Netherlands; School of Psychology, The University of Sydney, NSW 2006, Australia.
- Frans A J Verstraten
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, The Netherlands; School of Psychology, The University of Sydney, NSW 2006, Australia
- David Alais
- School of Psychology, The University of Sydney, NSW 2006, Australia
25
Waxman AM, Wohn K. Contour Evolution, Neighborhood Deformation, and Global Image Flow: Planar Surfaces in Motion. Int J Rob Res 1985; 4(3). DOI: 10.1177/027836498500400307.
Abstract
In the kinematic analysis of time-varying imagery, where the goal is to recover object surface structure and space motion from image flow, an appropriate representation for the flow field consists of a set of deformation parameters that describe the rate of change of an image neighborhood. In this paper we develop methods for extracting these deformation parameters from evolving contours in an image sequence, the image contours being manifestations of surface texture seen in perspective projection. Our results follow directly from the analytic structure of the underlying image flow; no heuristics are imposed. The deformation parameters we seek are actually linear combinations of the Taylor series coefficients (through second derivatives) of the local image flow field. Thus, a by-product of our approach is a second-order polynomial approximation to the image flow in the neighborhood of a contour. For curved surfaces this approximation is only locally valid, but for planar surfaces it is globally valid (i.e., it is exact). Our analysis reveals an "aperture problem in the large" in which insufficient contour structure leaves the set of 12 deformation parameters underdetermined. We also assess the sensitivity of our method to the simulated effects of noise in the "normal flow" around contours as well as the angular field of view subtended by contours. The sensitivity analysis is carried out in the context of planar surfaces executing general rigid-body motions in space. Future work will address the additional considerations relevant to curved surface patches.
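The 12 deformation parameters referred to above correspond to a second-order Taylor expansion of the two flow components about an image point (schematic form consistent with the abstract): u(x, y) \approx u_0 + u_x x + u_y y + \tfrac{1}{2}(u_{xx} x^2 + 2 u_{xy} x y + u_{yy} y^2), and likewise for v(x, y), giving six coefficients per component and twelve in total; the evolving contour constrains these coefficients, and the "aperture problem in the large" arises precisely when the visible contour structure leaves some of the twelve underdetermined.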
Collapse
Affiliation(s)
- Allen M. Waxman
- Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, Maryland 20742
| | - Kwangyoen Wohn
- Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park, Maryland 20742
| |
Collapse
|
26
|
Temporal Asymmetry in Dark-Bright Processing Initiates Propagating Activity across Primary Visual Cortex. J Neurosci 2016; 36:1902-13. [PMID: 26865614 DOI: 10.1523/jneurosci.3235-15.2016] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Differences between visual pathways representing darks and lights have been shown to affect spatial resolution and detection timing. Both psychophysical and physiological studies suggest an underlying retinal origin with amplification in primary visual cortex (V1). Here we show that temporal asymmetries in the processing of darks and lights create motion in terms of propagating activity across V1. Exploiting the high spatiotemporal resolution of voltage-sensitive dye imaging, we captured population responses to abrupt local changes of luminance in cat V1. For stimulation we used two neighboring small squares presented on either bright or dark backgrounds. When a single square changed from dark to bright or vice versa, we found coherent population activity emerging at the respective retinal input locations. However, faster rising and decay times were obtained for the bright to dark than the dark to bright changes. When the two squares changed luminance simultaneously in opposite polarities, we detected a propagating wave front of activity that originated at the cortical location representing the darkened square and rapidly expanded toward the region representing the brightened location. Thus, simultaneous input led to sequential activation across cortical retinotopy. Importantly, this effect was independent of the squares' contrast with the background. We suggest imbalance in dark-bright processing as a driving force in the generation of wave-like activity. Such propagation may convey motion signals and influence perception of shape whenever abrupt shifts in visual objects or gaze cause counterchange of luminance at high-contrast borders. SIGNIFICANCE STATEMENT An elementary process in vision is the detection of darks and lights through the retina via ON and OFF channels. Psychophysical and physiological studies suggest that differences between these channels affect spatial resolution and detection thresholds. Here we show that temporal asymmetries in the processing of darks and lights create motion signals across visual cortex. Using two neighboring squares, which simultaneously counterchanged luminance, we discovered propagating activity that was strictly drawn out from cortical regions representing the darkened location. Thus, a synchronous stimulus event translated into sequential wave-like brain activation. Such propagation may convey motion signals accessible in higher brain areas, whenever abrupt shifts in visual objects or gaze cause counterchange of luminance at high-contrast borders.
Collapse
|
27
|
Sheppard BM, Pettigrew JD. Plaid Motion Rivalry: Correlates with Binocular Rivalry and Positive Mood State. Perception 2016; 35:157-69. [PMID: 16583762 DOI: 10.1068/p5395] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Recently Hupé and Rubin (2003, Vision Research 43, 531–548) re-introduced the plaid as a form of perceptual rivalry by using two sets of drifting gratings behind a circular aperture to produce quasi-regular perceptual alternations between a coherent moving plaid of diamond-shaped intersections and the two sets of component ‘sliding' gratings. We call this phenomenon plaid motion rivalry (PMR), and have compared its temporal dynamics with those of binocular rivalry in a sample of subjects covering a wide range of perceptual alternation rates. In support of the proposal that all rivalries may be mediated by a common switching mechanism, we found a high correlation between alternation rates induced by PMR and binocular rivalry. In keeping with a link discovered between the phase of rivalry and mood, we also found a link between PMR and an individual's mood state that is consistent with suggestions that each opposing phase of rivalry is associated with one or the other hemisphere, with the ‘diamonds' phase of PMR linked with the ‘positive' left hemisphere.
Collapse
Affiliation(s)
- Bonita M Sheppard
- Vision Touch and Hearing Research Center, School of Biomedical Sciences, University of Queensland, Australia.
| | | |
Collapse
|
28
|
Todorović D. A Gem from the Past: Pleikart Stumpf's (1911) Anticipation of the Aperture Problem, Reichardt Detectors, and Perceived Motion Loss at Equiluminance. Perception 2016. [DOI: 10.1068/p251235] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Dejan Todorović
- Laboratory of Experimental Psychology, Department of Psychology, University of Belgrade, Čika Ljubina 18-20, 11000 Belgrade, Serbia, Yugoslavia
| |
Collapse
|
29
|
Ichikawa M, Masakura Y, Munechika K. Dependence of Illusory Motion on Directional Consistency in Oblique Components. Perception 2016; 35:933-46. [PMID: 16970202 DOI: 10.1068/p5125] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Pinna and Brelstaff (2000, Vision Research 40, 2091–2096) reported a motion illusion on viewing two concentric circles consisting of quadrangular components with black and white sides on a grey background. Our results suggest that the illusion is based on the integration of motion signals derived from oblique components, and on the consistency in the direction among those components. Furthermore, arrays of these oblique components can elicit the perception of motion not only for the oblique components themselves, but also for other objects in the picture. We propose that the motion illusion depends not only upon detection of the illusory motion signal at each local oblique component, but also upon the accumulation of the signal all over the stimulus configuration.
Collapse
Affiliation(s)
- Makoto Ichikawa
- Department of Perceptual Sciences & Design Engineering, Yamaguchi University, Ube, Japan.
| | | | | |
Collapse
|
30
|
Abstract
A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379-393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that different reference-field interactions are nonlinear and depend on how the motion vectors are grouped. These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames.
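Johansson-style vector decomposition is easy to state concretely. The sketch below uses the mean of the part vectors as the common motion, which is only one of many admissible decompositions; the vectors themselves are hypothetical.

```python
import numpy as np

# Retinotopic motion vectors of the parts of a display (deg/s); made-up values.
vectors = np.array([[2.0, 1.0],
                    [2.0, -1.0],
                    [2.0, 0.0]])

common = vectors.mean(axis=0)   # one candidate common (reference-frame) motion
relative = vectors - common     # motion of each part relative to that frame

print("common motion:  ", common)
print("relative motion:\n", relative)
```

Because any vector could in principle serve as the common component, the decomposition is ill-posed, which is exactly why constraints, or the reference-field interactions studied here, are needed to pick one solution.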
Collapse
|
31
|
Encoding of yaw in the presence of distractor motion: studies in a fly motion sensitive neuron. J Neurosci 2015; 35:6481-94. [PMID: 25904799 DOI: 10.1523/jneurosci.4256-14.2015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Motion estimation is crucial for aerial animals such as the fly, which perform fast and complex maneuvers while flying through a 3-D environment. Motion-sensitive neurons in the lobula plate, a part of the visual brain, of the fly have been studied extensively for their specialized role in motion encoding. However, the visual stimuli used in such studies are typically highly simplified, often move in restricted ways, and do not represent the complexities of optic flow generated during actual flight. Here, we use combined rotations about different axes to study how H1, a wide-field motion-sensitive neuron, encodes preferred yaw motion in the presence of stimuli not aligned with its preferred direction. Our approach is an extension of "white noise" methods, providing a framework that is readily adaptable to quantitative studies into the coding of mixed dynamic stimuli in other systems. We find that the presence of a roll or pitch ("distractor") stimulus reduces information transmitted by H1 about yaw, with the amount of this reduction depending on the variance of the distractor. Spike generation is influenced by features of both yaw and the distractor, where the degree of influence is determined by their relative strengths. Certain distractor features may induce bidirectional responses, which are indicative of an imbalance between global excitation and inhibition resulting from complex optic flow. Further, the response is shaped by the dynamics of the combined stimulus. Our results provide intuition for plausible strategies involved in efficient coding of preferred motion from complex stimuli having multiple motion components.
Collapse
|
32
|
Krause MR, Pack CC. Contextual modulation and stimulus selectivity in extrastriate cortex. Vision Res 2014; 104:36-46. [PMID: 25449337 DOI: 10.1016/j.visres.2014.10.006] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2014] [Revised: 10/08/2014] [Accepted: 10/09/2014] [Indexed: 11/26/2022]
Abstract
Contextual modulation is observed throughout the visual system, using techniques ranging from single-neuron recordings to behavioral experiments. Its role in generating feature selectivity within the retina and primary visual cortex has been extensively described in the literature. Here, we describe how similar computations can also elaborate feature selectivity in the extrastriate areas of both the dorsal and ventral streams of the primate visual system. We discuss recent work that makes use of normalization models to test specific roles for contextual modulation in visual cortex function. We suggest that contextual modulation renders neuronal populations more selective for naturalistic stimuli. Specifically, we discuss contextual modulation's role in processing optic flow in areas MT and MST and for representing naturally occurring curvature and contours in areas V4 and IT. We also describe how the circuitry that supports contextual modulation is robust to variations in overall input levels. Finally, we describe how this theory relates to other hypothesized roles for contextual modulation.
Collapse
Affiliation(s)
- Matthew R Krause
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.
| | - Christopher C Pack
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
| |
Collapse
|
33
|
Hughes AE, Troscianko J, Stevens M. Motion dazzle and the effects of target patterning on capture success. BMC Evol Biol 2014; 14:201. [PMID: 25213150 PMCID: PMC4172783 DOI: 10.1186/s12862-014-0201-4] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Accepted: 09/09/2014] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Stripes and other high contrast patterns found on animals have been hypothesised to cause "motion dazzle", a type of defensive coloration that operates when in motion, causing predators to misjudge the speed and direction of object movement. Several recent studies have found some support for this idea, but little is currently understood about the mechanisms underlying this effect. Using humans as model 'predators' in a touch screen experiment we investigated further the effectiveness of striped targets in preventing capture, and considered how stripes compare to other types of patterning in order to understand what aspects of target patterning are important in making a target difficult to capture. RESULTS We find that striped targets are among the most difficult to capture, but that other patterning types are also highly effective at preventing capture in this task. Several target types, including background sampled targets and targets with a 'spot' on were significantly easier to capture than striped targets. We also show differences in capture attempt rates between different target types, but we find no differences in learning rates between target types. CONCLUSIONS We conclude that striped targets are effective in preventing capture, but are not uniquely difficult to catch, with luminance matched grey targets also showing a similar capture rate. We show that key factors in making capture easier are a lack of average background luminance matching and having trackable 'features' on the target body. We also find that striped patterns are attempted relatively quickly, despite being difficult to catch. We discuss these findings in relation to the motion dazzle hypothesis and how capture rates may be affected more generally by pattern type.
Collapse
|
34
|
Ashida H, Scott-Samuel NE. Motion influences the perception of background lightness. Iperception 2014; 5:41-9. [PMID: 25165515 PMCID: PMC4130506 DOI: 10.1068/i0628] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2013] [Revised: 12/03/2013] [Indexed: 11/20/2022] Open
Abstract
Uniform backgrounds appear lighter or darker when elements containing luminance gradients move across them, a phenomenon first presented by Ko Nakamura at the 2010 Illusion Contest in Japan. We measured the apparent lightness of the background with a configuration where the grey background was overlaid with moving square patches of vertically oriented luminance gradient. For black-to-grey gradients, the background appeared lighter when the black edges were leading than when they were trailing. For white-to-grey gradients, the background appeared darker when the white edges were leading than when they were trailing. For white-to-black gradients, the background appeared darker with a white edge leading and lighter with a dark edge leading, but the effects were weaker. These results demonstrate that lightness contrast can be modulated by the direction of motion of the inducing patterns. The smooth gradient is essential, because the effect disappeared when the black-to-white gradient was replaced with the binary black and white pattern. We speculate that asymmetry in the processing of a temporal gradient with increasing and decreasing contrast, as proposed to explain the “Rotating Snakes” illusion (Murakami, Kitaoka, & Ashida, 2006, Vision Research, 46, 2421–2431), might be the basis for this effect.
Collapse
Affiliation(s)
- Hiroshi Ashida
- Graduate School of Letters, Kyoto University, Kyoto 6068501, Japan; e-mail:
| | - Nicholas E Scott-Samuel
- School of Experimental Psychology, University of Bristol, 12a Priory Road, Clifton, Bristol BS8 1TU, UK; e-mail:
| |
Collapse
|
35
|
Perry CJ, Fallah M. Feature integration and object representations along the dorsal stream visual hierarchy. Front Comput Neurosci 2014; 8:84. [PMID: 25140147 PMCID: PMC4122209 DOI: 10.3389/fncom.2014.00084] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2014] [Accepted: 07/16/2014] [Indexed: 11/13/2022] Open
Abstract
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features.
Collapse
Affiliation(s)
- Carolyn Jeane Perry
- Visual Perception and Attention Laboratory, School of Kinesiology and Health Science, York University Toronto, ON, Canada ; Centre for Vision Research, York University Toronto, ON, Canada
| | - Mazyar Fallah
- Visual Perception and Attention Laboratory, School of Kinesiology and Health Science, York University Toronto, ON, Canada ; Centre for Vision Research, York University Toronto, ON, Canada ; Departments of Biology and Psychology, York University Toronto, ON, Canada ; Canadian Action and Perception Network, York University Toronto, ON, Canada
| |
Collapse
|
36
|
Global-motion aftereffect does not depend on awareness of the adapting motion direction. Atten Percept Psychophys 2014; 76:766-79. [PMID: 24430562 DOI: 10.3758/s13414-013-0609-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
It has been shown that humans cannot perceive more than three directions from a multidirectional motion stimulus. However, it remains unknown whether adapting to such imperceptible motion directions could generate motion aftereffects (MAEs). A series of psychophysical experiments were conducted to address this issue. Using a display consisting of randomly oriented Gabors, we replicated previous findings that observers were unable to perceive the global directions embedded in a five-direction motion pattern. However, adapting to this multidirectional pattern induced both static and dynamic MAEs, despite the fact that observers were unaware of any global motion directions during adaptation. Furthermore, by comparing the strengths of the dynamic MAEs induced at different levels of motion processing, we found that spatial integration of local illusory signals per se was sufficient to produce a significant global MAE. These psychophysical results show that the generation of a directional global MAE does not require conscious perception of the global motion during adaptation.
Collapse
|
37
|
Van Hooser SD, Escobar GM, Maffei A, Miller P. Emerging feed-forward inhibition allows the robust formation of direction selectivity in the developing ferret visual cortex. J Neurophysiol 2014; 111:2355-73. [PMID: 24598528 PMCID: PMC4099478 DOI: 10.1152/jn.00891.2013] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Accepted: 03/03/2014] [Indexed: 11/22/2022] Open
Abstract
The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition.
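Two ingredients named in the abstract, a ceiling on LGN-to-cortex weights and postsynaptic-activity-dependent strengthening of inhibition, can be caricatured in a few lines. This is a toy sketch, not the authors' model: input statistics, thresholds, and learning rates are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 20
w_exc = rng.uniform(0.05, 0.15, n_inputs)  # feed-forward excitatory weights
w_max = 0.5                                # imposed ceiling on each weight
w_inh = 0.02                               # feed-forward inhibition strength
eta_exc, eta_inh = 0.01, 0.005             # learning rates (placeholders)

for trial in range(1000):
    x = (rng.random(n_inputs) < 0.2).astype(float)  # presynaptic activity
    drive = w_exc @ x - w_inh * x.sum()             # excitation minus inhibition
    if drive > 0.2:                                 # postsynaptic spike
        # Hebbian potentiation of coactive inputs, capped at w_max ...
        w_exc = np.minimum(w_exc + eta_exc * x, w_max)
        # ... and inhibition that grows with postsynaptic activity (POSD-LTPi-like)
        w_inh += eta_inh

print(f"mean excitatory weight {w_exc.mean():.3f}, inhibition {w_inh:.3f}")
```

The cap keeps any single input from driving the cell on its own, while the slowly growing inhibition supplies the competition that the abstract argues is needed for training with bidirectional stimulation to yield direction selectivity.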
Collapse
Affiliation(s)
- Stephen D Van Hooser
- Department of Biology, Brandeis University, Waltham, Massachusetts; Sloan-Swartz Center for Theoretical Neurobiology, Brandeis University, Waltham, Massachusetts; Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts;
| | - Gina M Escobar
- Department of Biology, Brandeis University, Waltham, Massachusetts; Sloan-Swartz Center for Theoretical Neurobiology, Brandeis University, Waltham, Massachusetts; Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts
| | - Arianna Maffei
- Department of Neurobiology and Behavior, State University of New York-Stony Brook, Stony Brook, New York; and SUNY Eye Institute, State University of New York-Stony Brook, Stony Brook, New York
| | - Paul Miller
- Department of Biology, Brandeis University, Waltham, Massachusetts; Sloan-Swartz Center for Theoretical Neurobiology, Brandeis University, Waltham, Massachusetts; Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts
| |
Collapse
|
38
|
An X, Gong H, McLoughlin N, Yang Y, Wang W. The mechanism for processing random-dot motion at various speeds in early visual cortices. PLoS One 2014; 9:e93115. [PMID: 24682033 PMCID: PMC3969330 DOI: 10.1371/journal.pone.0093115] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2013] [Accepted: 03/03/2014] [Indexed: 11/18/2022] Open
Abstract
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing.
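The difference between the two population read-outs mentioned at the end of the abstract is easy to see in a toy example. The tuning curves below are invented: a population responding to a fast random-dot stimulus is assumed to show two peaks at opposite directions along the motion axis.

```python
import numpy as np

prefs = np.deg2rad(np.arange(0, 360, 30))          # preferred directions

# Hypothetical population response with peaks at 0 and 180 deg, i.e. both
# directions along a horizontal motion axis are represented.
resp = np.exp(np.cos(prefs)) + np.exp(np.cos(prefs - np.pi))

vec_sum = np.array([np.sum(resp * np.cos(prefs)),   # vector-summation read-out
                    np.sum(resp * np.sin(prefs))])
vec_max = np.rad2deg(prefs[np.argmax(resp)])        # vector-maximum read-out

print("vector sum :", np.round(vec_sum, 3), "(opposite peaks cancel)")
print("vector max :", vec_max, "deg (picks one direction on the motion axis)")
```

With two balanced peaks the summed vector collapses toward zero, whereas taking the maximum still recovers a direction on the motion axis, which is one way to read the vector-maximum versus vector-summation contrast reported here.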
Collapse
Affiliation(s)
- Xu An
- CAS Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei, P. R. China; Institute of Neuroscience and State Key Laboratory of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, P. R. China
| | - Hongliang Gong
- Institute of Neuroscience and State Key Laboratory of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, P. R. China
| | - Niall McLoughlin
- Faculty of Life Sciences, University of Manchester, Manchester, United Kingdom
| | - Yupeng Yang
- CAS Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei, P. R. China
| | - Wei Wang
- Institute of Neuroscience and State Key Laboratory of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, P. R. China
| |
Collapse
|
39
|
Contrasting accounts of direction and shape perception in short-range motion: Counterchange compared with motion energy detection. Atten Percept Psychophys 2014; 76:1350-70. [PMID: 24634030 DOI: 10.3758/s13414-014-0650-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
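The polarity asymmetry at issue can be made concrete with a bare-bones opponent correlator. This is a minimal Reichardt-style sketch, not the elaborated Reichardt detector of van Santen and Sperling; the stimulus and delay are arbitrary.

```python
import numpy as np

def correlator(a, b, delay=1):
    """Opponent correlation of two neighbouring luminance signals a(t), b(t):
    positive output signals motion from A toward B, negative the reverse."""
    return np.mean(np.roll(a, delay) * b - np.roll(b, delay) * a)

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20.0)   # luminance at location A over time
a = signal
b = np.roll(signal, 3)                  # location B sees the same signal 3 steps later

print("same polarity    :", round(correlator(a, b), 3))   # positive: A -> B motion
print("inverted polarity:", round(correlator(a, -b), 3))  # sign flips (reverse phi)
```

Inverting the polarity of one input flips the sign of the correlator output symmetrically, which is why motion-energy-style accounts predict mirror-image performance for same- and inverted-polarity frames; the observed asymmetry is what motivates the counterchange account.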
Collapse
|
40
|
Abstract
Wertheimer's (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) classical distinction between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly for fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and for relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).
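A literal reading of the counterchange idea, motion signalled by oppositely signed contrast changes at neighbouring locations, can be sketched as below. The detector and the four-frame stimulus are illustrative only and follow the verbal description rather than the authors' implementation.

```python
import numpy as np

def counterchange(c_a, c_b):
    """Counterchange evidence between adjacent locations A and B:
    a contrast decrease at A paired with an increase at B counts as
    A -> B motion, and vice versa; returns net A -> B evidence."""
    d_a, d_b = np.diff(c_a), np.diff(c_b)
    to_b = np.sum((d_a < 0) & (d_b > 0))
    to_a = np.sum((d_a > 0) & (d_b < 0))
    return to_b - to_a

# Contrast at two boundary locations while an edge steps from A to B.
contrast_a = np.array([1.0, 1.0, 0.2, 0.2])
contrast_b = np.array([0.2, 0.2, 1.0, 1.0])
print(counterchange(contrast_a, contrast_b))  # positive: motion from A to B
```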
Collapse
|
41
|
Huang LT, Wong AMK, Chen CPC, Chang WH, Cheng JW, Lin YR, Pei YC. Global motion percept mediated through integration of barber poles presented in bilateral visual hemifields. PLoS One 2013; 8:e74032. [PMID: 24009764 PMCID: PMC3756956 DOI: 10.1371/journal.pone.0074032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2012] [Accepted: 08/01/2013] [Indexed: 11/21/2022] Open
Abstract
How is motion information that has been obtained through multiple viewing apertures integrated to form a global motion percept? We investigated the mechanisms of motion integration across apertures in two hemifields by presenting gratings through two rectangles (that form the dual barber poles) and recording the perceived direction of motion by human observers. To this end, we presented dual barber poles in conditions with various inter-component distances between the apertures and evaluated the degree to which the hemifield information was integrated by measuring the magnitude of the perceived barber pole illusion. Surprisingly, when the inter-component distance between the two apertures was short, the perceived direction of motion of the dual barber poles was similar to that of a single barber pole formed by the concatenation of the two component barber poles, indicating motion integration is achieved through a simple concatenation mechanism. We then presented dual barber poles in which the motion and contour properties of the two component barber poles differed to characterize the constraints underlying cross-hemifield integration. We found that integration is achieved only when phase, speed, wavelength, temporal frequency, and duty cycle are identical in the two barber poles, but can remain robust when the contrast of the two component barber poles differs substantially. We concluded that a motion stimulus presented in bilateral hemifields tends to be integrated to yield a global percept with a substantial tolerance for spatial distance and contrast difference.
Collapse
Affiliation(s)
- Li-Ting Huang
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Alice M. K. Wong
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Carl P. C. Chen
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Wei-Han Chang
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Ju-Wen Cheng
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Yu-Ru Lin
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
| | - Yu-Cheng Pei
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Healthy Aging Research Center, Chang Gung University, Taoyuan, Taiwan
- * E-mail:
| |
Collapse
|
42
|
Blair CD, Goold J, Killebrew K, Caplovitz GP. Form features provide a cue to the angular velocity of rotating objects. J Exp Psychol Hum Percept Perform 2013; 40:116-28. [PMID: 23750970 DOI: 10.1037/a0033055] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
As an object rotates, each location on the object moves with an instantaneous linear velocity, dependent upon its distance from the center of rotation, whereas the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different-sized objects, as changing the size of an object changes the linear velocity of each location on the object's surface, while maintaining the object's angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high-contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object.
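The geometric relation the experiments exploit, that linear speed grows with distance from the rotation centre while angular velocity stays fixed, amounts to v = ωr. The sizes below are arbitrary.

```python
import numpy as np

omega = np.deg2rad(90.0)            # fixed rotation rate: 90 deg of rotation per s
radii = np.array([1.0, 2.0, 4.0])   # distance of a contour point from the centre

for r, v in zip(radii, omega * radii):   # v = omega * r
    print(f"radius {r:.0f}: local linear speed {v:.2f} units/s at the same angular velocity")
```

Doubling the object's size doubles every local linear velocity without changing the angular velocity, which is the manipulation used to ask which quantity perceived rotational speed tracks.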
Collapse
|
43
|
Abstract
The thalamus is crucial in determining the sensory information conveyed to cortex. In the visual system, the thalamic lateral geniculate nucleus (LGN) is generally thought to encode simple center-surround receptive fields, which are combined into more sophisticated features in cortex, such as orientation and direction selectivity. However, recent evidence suggests that a more diverse set of retinal ganglion cells projects to the LGN. We therefore used multisite extracellular recordings to define the repertoire of visual features represented in the LGN of mouse, an emerging model for visual processing. In addition to center-surround cells, we discovered a substantial population with more selective coding properties, including direction and orientation selectivity, as well as neurons that signal absence of contrast in a visual scene. The direction and orientation selective neurons were enriched in regions that match the termination zones of direction selective ganglion cells from the retina, suggesting a source for their tuning. Together, these data demonstrate that the mouse LGN contains a far more elaborate representation of the visual scene than current models posit. These findings should therefore have a significant impact on our understanding of the computations performed in mouse visual cortex.
Collapse
|
44
|
Howe PDL, Holcombe AO, Lapierre MD, Cropper SJ. Visually Tracking and Localizing Expanding and Contracting Objects. Perception 2013; 42:1281-300. [DOI: 10.1068/p7635] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
The maintenance of attention on moving objects is required for cognition to reliably engage with the visual world. Theories of object tracking need to explain on which patterns of visual stimulation one can easily maintain attention and on which patterns one cannot. A previous study has shown that it is easier to track rigid objects than objects that expand and contract along their direction of motion, in a manner that resembles a substance pouring from one location to another (vanMarle and Scholl 2003, Psychological Science 14, 498–504). Here we investigate six possible explanations for this finding and find evidence supporting two of them. Our results show that, first, objects that expand and contract tend to overlap and crowd each other more, and this increases tracking difficulty. Second, expansion and contraction make it harder to localize objects, even when there is only a single target to attend to, and this may also increase tracking difficulty. Currently, there is no theory of object tracking that can account for the second finding.
Collapse
Affiliation(s)
- Piers D L Howe
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC 3010, Australia
| | | | - Mark D Lapierre
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC 3010, Australia
| | - Simon J Cropper
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC 3010, Australia
| |
Collapse
|
45
|
Distinct functional organizations for processing different motion signals in V1, V2, and V4 of macaque. J Neurosci 2012; 32:13363-79. [PMID: 23015427 DOI: 10.1523/jneurosci.1900-12.2012] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Motion perception is qualitatively invariant across different objects and forms, namely, the same motion information can be conveyed by many different physical carriers, and it requires the processing of motion signals consisting of direction, speed, and axis or trajectory of motion defined by a moving object. Compared with the representation of orientation, the cortical processing of these different motion signals within the early ventral visual pathway of the primate remains poorly understood. Using drifting full-field noise stimuli and intrinsic optical imaging, along with cytochrome-oxidase staining, we found that the orientation domains in macaque V1, V2, and V4 that processed orientation signals also served to process motion signals associated with the axis and speed of motion. In contrast, direction domains within the thick stripes of V2 demonstrated preferences that were independent of motion speed. The population responses encoding the orientation and motion axis could be precisely reproduced by a spatiotemporal energy model. Thus, our observation of orientation domains with dual functions in V1, V2, and V4 directly support the notion that the linear representation of the temporal series of retinotopic activations may serve as another motion processing strategy in primate ventral visual pathway, contributing directly to fine form and motion analysis. Our findings further reveal that different types of motion information are differentially processed in parallel and segregated compartments within primate early visual cortices, before these motion features are fully combined in high-tier visual areas.
Collapse
|
46
|
Lin Z, He S. Emergent filling in induced by motion integration reveals a high-level mechanism in filling in. Psychol Sci 2012; 23:1534-41. [PMID: 23085642 PMCID: PMC3875405 DOI: 10.1177/0956797612446348] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The visual system is intelligent: it is capable of recovering a coherent surface from an incomplete one, a feat known as perceptual completion or filling in. Traditionally, it has been assumed that surface features are interpolated in a way that resembles the fragmented parts. Using displays featuring four circular apertures, we showed in the study reported here that a distinct completed feature (horizontal motion) arises from local ones (oblique motions); we term this process emergent filling in. Adaptation to emergent filling-in motion generated a dynamic motion aftereffect that was not due to spreading of local motion from the isolated apertures. The filling-in motion aftereffect occurred in both modal and amodal completions, and it was modulated by selective attention. These findings highlight the importance of high-level interpolation processes in filling in and are consistent with the idea that during emergent filling in, the more cognitive-symbolic processes in later areas (e.g., the middle temporal visual area and the lateral occipital complex) provide important feedback signals to guide more isomorphic processes in earlier areas (V1 and V2).
Collapse
Affiliation(s)
- Zhicheng Lin
- Department of Psychology, University of Minnesota, Twin Cities, USA.
| | | |
Collapse
|
47
|
Hock HS, Nichols DF. Motion perception induced by dynamic grouping: a probe for the compositional structure of objects. Vision Res 2012; 59:45-63. [PMID: 22391512 DOI: 10.1016/j.visres.2011.11.015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2011] [Revised: 11/14/2011] [Accepted: 11/15/2011] [Indexed: 10/28/2022]
Abstract
A new method is described for determining how the visual system resolves ambiguities in the compositional structure of multi-surface objects; i.e., how the surfaces of objects are grouped together to form a hierarchical structure. The method entails dynamic grouping motion, a high level process in which changes in a surface (e.g., increases or decreases in its luminance, hue or texture) transiently perturb its affinity with adjacent surfaces. Affinity is determined by the combined effects of Gestalt and other grouping variables in indicating that a pair of surfaces forms a subunit within an object's compositional structure. Such pre-perturbation surface groupings are indicated by the perception of characteristic motions across the changing surface. When the affinity of adjacent surfaces is increased by a dynamic grouping variable, their grouping is transiently strengthened; the perceived motion is away from their boundary. When the affinity of adjacent surfaces is decreased, their grouping is transiently weakened; the perceived motion is toward the surfaces' boundary. It is shown that the affinity of adjacent surfaces depends on the nonlinear, super-additive combination of affinity values ascribable to individual grouping variables, and the effect of dynamic grouping variables on motion perception depends on the prior, pre-perturbation affinity state of the surfaces. It is proposed that affinity-based grouping of an object's surfaces must be consistent with the activation of primitive three-dimensional object components in order for the object to be recognized. Also discussed is the potential use of dynamic grouping for determining the compositional structure of multi-object scenes.
Collapse
Affiliation(s)
- Howard S Hock
- Department of Psychology, Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, FL 33431, USA.
| | | |
Collapse
|
48
|
Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback. PLoS One 2011; 6:e29414. [PMID: 22216275 PMCID: PMC3245272 DOI: 10.1371/journal.pone.0029414] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2011] [Accepted: 11/28/2011] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
Collapse
|
49
|
Purves D, Wojtach WT, Lotto RB. Understanding vision in wholly empirical terms. Proc Natl Acad Sci U S A 2011; 108 Suppl 3:15588-95. [PMID: 21383192 PMCID: PMC3176612 DOI: 10.1073/pnas.1012178108] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
This article considers visual perception, the nature of the information on which perceptions seem to be based, and the implications of a wholly empirical concept of perception and sensory processing for vision science. Evidence from studies of lightness, brightness, color, form, and motion all indicate that, because the visual system cannot access the physical world by means of retinal light patterns as such, what we see cannot and does not represent the actual properties of objects or images. The phenomenology of visual perceptions can be explained, however, in terms of empirical associations that link images whose meanings are inherently undetermined to their behavioral significance. Vision in these terms requires fundamentally different concepts of what we see, why, and how the visual system operates.
Collapse
Affiliation(s)
- Dale Purves
- Center for Cognitive Neuroscience, Department of Neurobiology, Duke University, Duke-National University of Singapore Graduate Medical School, Singapore 169857.
| | | | | |
Collapse
|
50
|
Beck C, Neumann H. Combining feature selection and integration--a neural model for MT motion selectivity. PLoS One 2011; 6:e21254. [PMID: 21814543 PMCID: PMC3140976 DOI: 10.1371/journal.pone.0021254] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2010] [Accepted: 05/26/2011] [Indexed: 11/18/2022] Open
Abstract
Background The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraint rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of a MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour.
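The two read-outs the model moves between for type II plaids, vector average and intersection of constraints (IOC), can be compared directly for a pair of component gratings. The component orientations and speeds below are hypothetical.

```python
import numpy as np

def ioc(normals, speeds):
    """Intersection of constraints: the pattern velocity v satisfying
    v . n_i = s_i for every component grating (normal n_i, speed s_i)."""
    v, *_ = np.linalg.lstsq(np.asarray(normals, float),
                            np.asarray(speeds, float), rcond=None)
    return v

def vector_average(normals, speeds):
    """Mean of the component normal-velocity vectors."""
    N, s = np.asarray(normals, float), np.asarray(speeds, float)
    return (N * s[:, None]).mean(axis=0)

# Type II plaid: both component normals lie on the same side of the
# resulting pattern direction (angles in degrees, speeds in deg/s).
angles = np.deg2rad([10.0, 40.0])
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
speeds = [1.0, 1.4]

for name, v in [("IOC", ioc(normals, speeds)),
                ("vector average", vector_average(normals, speeds))]:
    print(f"{name:>14}: direction {np.degrees(np.arctan2(v[1], v[0])):5.1f} deg")
```

For this type II configuration the two rules give clearly different pattern directions, which is the kind of stimulus for which MT responses are reported to shift over time from a vector-average toward an IOC or feature-tracking solution.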
Collapse
Affiliation(s)
- Cornelia Beck
- Institute of Neural Information Processing, University of Ulm, Ulm, Germany.
| | | |
Collapse
|