1
Chakrala AS, Xiao J, Huang X. The role of binocular disparity and attention in the neural representation of multiple moving stimuli in the visual cortex. bioRxiv [Preprint] 2024:2023.06.25.546480. [PMID: 37425944] [PMCID: PMC10327011] [DOI: 10.1101/2023.06.25.546480]
Abstract
A fundamental process of vision involves segmenting visual scenes into distinct objects and surfaces. Stereoscopic depth and visual motion are important cues for segmentation. However, the mechanisms by which the visual system utilizes depth and motion cues to segment multiple objects are not fully understood. We investigated how neurons in the middle temporal (MT) cortex of macaque monkeys represented overlapping surfaces located at different depths and moving simultaneously in different directions. We recorded neuronal activity in MT of three male monkeys while they performed discrimination tasks under different attention conditions. We found that neuronal responses to overlapping surfaces showed a robust bias toward the horizontal binocular disparity of one of the two surfaces. Across all animals, the disparity bias of a neuron in response to two surfaces correlated positively with the neuron's disparity preference for a single surface. For two animals, neurons that preferred near disparities of single surfaces (near neurons) showed a near bias to overlapping stimuli, whereas neurons that preferred far disparities (far neurons) showed a far bias. For the third animal, both near and far neurons displayed a near bias, although the near bias was stronger in the near neurons. Interestingly, for all three animals, both near and far neurons exhibited an initial near bias relative to the average of the responses to the individual surfaces. Although attention can modulate neuronal responses to better represent the attended surface, the disparity bias was not due to attention. We also found that the effect of attention modulation on MT responses was consistent with object-based rather than feature-based attention. We proposed a model in which the pool size of the neuron population that weights the responses to individual stimulus components can be variable. This model is a novel extension of the standard normalization model and provides a unified explanation for the disparity bias observed across animals. Our results revealed the neural encoding rule for multiple stimuli located at different depths and presented new evidence of response modulation by object-based attention in MT. The disparity bias allows subgroups of neurons to preferentially represent individual surfaces of multiple stimuli at different depths, thereby facilitating segmentation.
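The variable-pool weighted-normalization idea sketched in this abstract can be illustrated with a toy computation. Everything below is an illustrative assumption (Gaussian disparity tuning, a four-neuron near-dominated pool, all parameter values), not the authors' fitted model; it shows only how the composition of the weighting pool biases the two-surface response toward one disparity.

```python
import numpy as np

# Schematic sketch of weighted normalization for two overlapping surfaces:
# the response weight for each disparity component is proportional to the
# summed response of a "weighting pool" of neurons to that component alone.
# Tuning shape, pool composition, and parameters are illustrative assumptions.

def disparity_tuning(preferred, disparity, sigma=1.0):
    """Gaussian disparity tuning curve (negative = near, positive = far)."""
    return np.exp(-0.5 * ((disparity - preferred) / sigma) ** 2)

def two_surface_response(preferred, near_disp, far_disp, pool_prefs):
    """Weighted average of the component responses, with weights set by the
    pooled population response to each surface presented alone."""
    w_near = disparity_tuning(pool_prefs, near_disp).sum()
    w_far = disparity_tuning(pool_prefs, far_disp).sum()
    r_near = disparity_tuning(preferred, near_disp)
    r_far = disparity_tuning(preferred, far_disp)
    return (w_near * r_near + w_far * r_far) / (w_near + w_far)

# A pool dominated by near-preferring neurons yields a near bias: the
# two-surface response exceeds the average of the component responses.
pool_prefs = np.array([-1.5, -1.0, -0.5, 0.0])
r_two = two_surface_response(preferred=-1.0, near_disp=-1.0, far_disp=1.0,
                             pool_prefs=pool_prefs)
```

Shifting the pool toward far-preferring neurons reverses the bias, which is the sense in which a variable pool could unify the near and far biases seen across animals.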
2
Sousa T, Sayal A, Duarte JV, Costa GN, Castelo-Branco M. A human cortical adaptive mutual inhibition circuit underlying competition for perceptual decision and repetition suppression reversal. Neuroimage 2024; 285:120488. [PMID: 38065278] [DOI: 10.1016/j.neuroimage.2023.120488]
Abstract
A model based on inhibitory coupling has been proposed to explain perceptual oscillations. This 'adapting reciprocal inhibition' model postulates that the strength of inhibitory coupling determines the fate of competition between percepts. Here, we used an fMRI-based adaptation technique to reveal the influence of neighboring neuronal populations, such as reciprocal inhibition, in motion-selective hMT+/V5. If reciprocal inhibition exists in this region, three predictions should hold: (1) the stimulus-driven response would not simply decrease, as predicted by simple repetition suppression of neuronal populations, but instead increase due to activity from adjacent populations; (2) perceptual decisions involving competing representations should reflect reciprocal inhibition weakened by adaptation; and (3) neural activity for the competing percept should also increase later on upon adaptation. Our results confirm all three predictions, showing that a model of perceptual decision based on adapting reciprocal inhibition holds true. Finally, they also show that this mechanism can reverse the net effect of the well-known repetition suppression phenomenon.
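The 'adapting reciprocal inhibition' dynamics can be illustrated with a minimal rate model in the spirit of classic rivalry models: two populations inhibit each other, and the dominant one slowly adapts until the suppressed one escapes. The equations and every parameter value below are generic textbook choices for demonstration, not the authors' model.

```python
import numpy as np

# Minimal sketch of adapting reciprocal inhibition: two rate units with
# mutual inhibition (strength beta) and slow adaptation (state a, gain g).
# The dominant unit adapts until the suppressed unit escapes, producing
# alternations. Parameter values are generic illustrative choices.

def simulate(steps=40000, dt=0.01, beta=3.0, g=3.0, tau_a=20.0):
    e = np.array([0.6, 0.4])          # firing rates, asymmetric start
    a = np.zeros(2)                   # adaptation states
    dominant = np.empty(steps, dtype=int)
    for t in range(steps):
        drive = 1.0 - beta * e[::-1] - g * a   # input minus inhibition and adaptation
        e = e + dt * (-e + np.maximum(drive, 0.0))
        a = a + (dt / tau_a) * (e - a)
        dominant[t] = int(e[1] > e[0])
    return dominant

dom = simulate()
n_switches = int(np.abs(np.diff(dom)).sum())   # perceptual alternations
```

Weakening the inhibitory coupling `beta` slows or abolishes the alternations, which is the knob the adaptation manipulation is assumed to turn in this account.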
Affiliation(s)
- Teresa Sousa
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal
- Alexandre Sayal
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal; Siemens Healthineers, Portugal
- João V Duarte
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Portugal
- Gabriel N Costa
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Portugal; Faculty of Psychology and Neuroscience, University of Maastricht, the Netherlands
3
Huang X, Ghimire B, Chakrala AS, Wiesner S. Neural encoding of multiple motion speeds in visual cortical area MT. bioRxiv [Preprint] 2023:2023.04.08.532456. [PMID: 37070082] [PMCID: PMC10107747] [DOI: 10.1101/2023.04.08.532456]
Abstract
Segmenting objects from each other and their background is critical for vision. The speed at which objects move provides a salient cue for segmentation. However, how the visual system represents and differentiates multiple speeds is largely unknown. Here we investigated the neural encoding of multiple speeds of overlapping stimuli in the primate visual cortex. We first characterized the perceptual capacity of human and monkey subjects to segment spatially overlapping stimuli moving at different speeds. We then determined how neurons in the motion-sensitive middle temporal (MT) cortex of macaque monkeys encode multiple speeds. We made a novel finding that the responses of MT neurons to two speeds of overlapping stimuli showed a robust bias toward the faster speed component when both speeds were slow (≤ 20°/s). The faster-speed bias occurred even when a neuron had a slow preferred speed and responded more strongly to the slower component than the faster component when presented alone. The faster-speed bias emerged very early in the neuronal response and was robust over time and to manipulations of motion direction and attention. As the stimulus speed increased, the faster-speed bias changed to response averaging. Our finding can be explained by a modified divisive normalization model, in which the weights for the speed components are proportional to the responses of a population of neurons elicited by the individual speeds. Our results suggest that the neuron population, referred to as the weighting pool, includes neurons that have a broad range of speed preferences. As a result, the response weights for the speed components are determined by the stimulus speeds and invariant to the speed preferences of individual neurons. Our findings help to define the neural encoding rule of multiple stimuli and provide new insight into the underlying neural mechanisms. The faster-speed bias would benefit behavioral tasks such as figure-ground segregation if figural objects tend to move faster than the background in the natural environment.
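The modified divisive normalization account can be sketched schematically. The log-Gaussian tuning, the weighting pool skewed toward faster preferred speeds, and all parameter values below are illustrative assumptions rather than the fitted model; the point is only that a slow-preferring neuron's response to two speeds is pulled toward the faster component when the pool responds more strongly to that component.

```python
import numpy as np

# Hedged sketch of the weighted-normalization account of the faster-speed
# bias: the weight for each speed component is proportional to the summed
# response of a broadly tuned "weighting pool" to that speed presented
# alone. The log-Gaussian tuning, the pool's preferred speeds (skewed
# toward faster speeds), and all parameters are illustrative assumptions.

def speed_tuning(preferred, speed, sigma=1.2):
    """Log-Gaussian speed tuning, a common description of MT neurons."""
    return np.exp(-0.5 * ((np.log2(speed) - np.log2(preferred)) / sigma) ** 2)

pool_prefs = np.array([4.0, 8.0, 16.0, 32.0, 64.0])  # preferred speeds (deg/s)

def bi_speed_response(preferred, slow, fast):
    """Weighted average of component responses; weights come from the
    pooled population response to each speed presented alone."""
    w_slow = speed_tuning(pool_prefs, slow).sum()
    w_fast = speed_tuning(pool_prefs, fast).sum()
    r_slow = speed_tuning(preferred, slow)
    r_fast = speed_tuning(preferred, fast)
    return (w_slow * r_slow + w_fast * r_fast) / (w_slow + w_fast)
```

Because the weights depend only on the pooled responses to the stimulus speeds, the bias is the same for every neuron regardless of its own speed preference, matching the invariance described in the abstract.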
Affiliation(s)
- Xin Huang
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA
- Bikalpa Ghimire
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA
- Steven Wiesner
- Department of Neuroscience, University of Wisconsin-Madison, Wisconsin 53705, USA
4
Hirano R, Numasawa K, Yoshimura Y, Miyamoto T, Kizuka T, Ono S. The effect of eccentricity on visual motion prediction in peripheral vision. Physiol Rep 2023; 11:e15877. [PMID: 37985195] [PMCID: PMC10659946] [DOI: 10.14814/phy2.15877]
Abstract
The purpose of the current study was to clarify the effect of eccentricity on visual motion prediction using a time-to-contact (TTC) task. TTC performance indexes the predictive ability to accurately estimate when a moving object will arrive at a goal based on visual motion perception. We also measured motion reaction time (motion RT) as an indicator of the speed of visual motion perception. In the TTC task, participants pressed a button when the moving target would arrive at the stationary goal. In the occluded condition, the target dot was occluded 500 ms before the time of contact. In the motion RT task, participants pressed a button as soon as the target moved. The visual targets were randomly presented at five eccentricities (4°, 6°, 8°, 10°, 12°) and moved on a circular trajectory at a constant tangential velocity (8°/s) to keep the eccentricity constant. Responses in the occluded TTC condition became earlier as eccentricity increased, and motion RT became longer as eccentricity increased. Therefore, it is most likely that slower speed perception in peripheral vision delays the perceived onset of motion and leads to an earlier response in the TTC task.
Affiliation(s)
- Riku Hirano
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Yusei Yoshimura
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan
- Takeshi Miyamoto
- Graduate School of Medicine, Kyoto University, Kyoto, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Tomohiro Kizuka
- Institute of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
- Seiji Ono
- Institute of Health and Sport Sciences, University of Tsukuba, Ibaraki, Japan
5
Kirkels LAMH, Zhang W, Rezvani Z, van Wezel RJA, van Wanrooij MM. Visual motion integration of bidirectional transparent motion in mouse opto-locomotor reflexes. Sci Rep 2021; 11:10490. [PMID: 34006985] [PMCID: PMC8131598] [DOI: 10.1038/s41598-021-89974-y]
Abstract
Visual motion perception depends on readout of direction-selective sensors. We investigated in mice whether the response to bidirectional transparent motion, which activates oppositely tuned sensors, reflects integration (averaging) or winner-take-all (mutual inhibition) mechanisms. We measured whole-body opto-locomotor reflexes (OLRs) to bidirectional, oppositely moving random dot patterns (leftward and rightward) and compared the response to predictions based on responses to unidirectional motion (leftward or rightward). In addition, responses were compared to stimulation with stationary patterns. When comparing OLRs to bidirectional and unidirectional conditions, we found that the OLR to bidirectional motion best fits an averaging model. These results reflect integration mechanisms in neural responses to contradicting sensory evidence, as has been documented for other sensory and motor domains.
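The model comparison described above can be sketched with a toy calculation. The reflex amplitudes below are made-up values standing in for the measured OLRs, used only to show how an averaging prediction and a winner-take-all prediction are pitted against an observed bidirectional response.

```python
# Sketch of the averaging vs. winner-take-all comparison, with made-up
# reflex amplitudes (positive = rightward drift) standing in for the
# measured opto-locomotor reflexes.

olr_left = -1.0     # hypothetical reflex to leftward-only motion
olr_right = 1.2     # hypothetical reflex to rightward-only motion

averaging = 0.5 * (olr_left + olr_right)             # integration prediction
winner_take_all = max(olr_left, olr_right, key=abs)  # mutual-inhibition prediction

def best_model(observed_bidirectional):
    """Return whichever prediction lies closer to the observed response."""
    errors = {"averaging": abs(observed_bidirectional - averaging),
              "winner-take-all": abs(observed_bidirectional - winner_take_all)}
    return min(errors, key=errors.get)
```

A bidirectional response near the mean of the two unidirectional responses favors the averaging model, which is the pattern the study reports.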
Affiliation(s)
- L A M H Kirkels
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- W Zhang
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- Z Rezvani
- School of Computer Science, Institute for Research in Fundamental Sciences, Tehran, Iran
- R J A van Wezel
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands; Biomedical Signals and Systems, TechMed Centre, Twente University, Enschede, The Netherlands
- M M van Wanrooij
- Department of Biophysics, Donders Institute, Radboud University, Nijmegen, The Netherlands
6
But Still It Moves: Static Image Statistics Underlie How We See Motion. J Neurosci 2020; 40:2538-2552. [PMID: 32054676] [PMCID: PMC7083528] [DOI: 10.1523/jneurosci.2760-19.2020]
Abstract
Seeing movement promotes survival. It results from an uncertain interplay between evolution and experience, making it hard to isolate the drivers of computational architectures found in brains. Here we seek insight into motion perception using a neural network (MotionNet) trained on moving images to classify velocity. The network recapitulates key properties of motion direction and speed processing in biological brains, and we use it to derive, and test, understanding of motion (mis)perception at the computational, neural, and perceptual levels. We show that diverse motion characteristics are largely explained by the statistical structure of natural images, rather than motion per se. First, we show how neural and perceptual biases for particular motion directions can result from the orientation structure of natural images. Second, we demonstrate an interrelation between speed and direction preferences in (macaque) MT neurons that can be explained by image autocorrelation. Third, we show that natural image statistics mean that speed and image contrast are related quantities. Finally, using behavioral tests (humans, both sexes), we show that it is knowledge of the speed-contrast association that accounts for motion illusions, rather than the distribution of movements in the environment (the "slow world" prior) premised by Bayesian accounts. Together, this provides an exposition of motion speed and direction estimation, and produces concrete predictions for future neurophysiological experiments. More broadly, we demonstrate the conceptual value of marrying artificial systems with biological characterization, moving beyond "black box" reproduction of an architecture to advance understanding of complex systems, such as the brain.

SIGNIFICANCE STATEMENT: Using an artificial systems approach, we show that physiological properties of motion processing can result from natural image structure. In particular, we show that the anisotropic distribution of orientations in natural images is sufficient to explain the cardinal bias for motion direction. We show that inherent autocorrelation in natural images means that speed and direction are related quantities, which could shape the relationship between speed and direction tuning of MT neurons. Finally, we show that movement speed and image contrast are related in moving natural images, and that motion misperception can be explained by this speed-contrast association rather than by a "slow world" prior.
7
Karşılar H, Kısa YD, Balcı F. Dilation and Constriction of Subjective Time Based on Observed Walking Speed. Front Psychol 2018; 9:2565. [PMID: 30627109] [PMCID: PMC6309241] [DOI: 10.3389/fpsyg.2018.02565]
Abstract
The physical properties of events are known to modulate perceived time. This study tested the effect of different quantitative (walking speed) and qualitative (walking forward vs. walking backward) features of observed motion on time perception in three complementary experiments. Participants were tested in the temporal discrimination (bisection) task, in which they were asked to categorize durations of walking animations as "short" or "long." We predicted that faster observed walking would speed up temporal integration and thereby shift the point of subjective equality leftward, and that this effect would increase monotonically with walking speed. To this end, we tested participants with two different ranges of walking speeds in Experiments 1 and 2 and observed a parametric effect of walking speed on perceived time irrespective of the direction of walking (forward vs. rewound forward walking). Experiment 3 contained a more plausible backward walking animation compared to the rewound walking animation used in Experiments 1 and 2 (as validated based on independent subjective ratings). The effect of walking speed and the lack of an effect of walking direction on perceived time were replicated in Experiment 3. Our results suggest a strong link between the speed, but not the direction, of perceived biological motion and subjective time.
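The predicted leftward shift can be illustrated with a minimal pacemaker-accumulator sketch: if observed motion scales the internal clock rate, the same physical duration accumulates more pulses and crosses the bisection criterion sooner. The anchor durations, the geometric-mean criterion, and the clock-gain values below are illustrative assumptions, not the study's parameters.

```python
import math

# Sketch of a pacemaker-accumulator account of the walking-speed effect:
# observed motion scales the pacemaker rate (clock_gain), so the same
# physical duration is perceived as longer and is categorized "long" at
# shorter durations, i.e., a leftward PSE shift. Anchors and gains are
# illustrative assumptions.

SHORT, LONG = 0.4, 1.6                  # anchor durations (s)
criterion = math.sqrt(SHORT * LONG)     # geometric-mean bisection point

def judged_long(duration, clock_gain):
    """Perceived duration = physical duration x clock gain."""
    return duration * clock_gain > criterion

def pse(clock_gain):
    """Physical duration at which the scaled clock reaches the criterion."""
    return criterion / clock_gain
```

With a gain above 1 (faster observed walking), the PSE falls below the geometric mean of the anchors, matching the direction of the reported effect.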
Affiliation(s)
- Hakan Karşılar
- Department of Psychology, Koç University, Istanbul, Turkey; Department of Psychology, Özyeğin University, Istanbul, Turkey
- Fuat Balcı
- Department of Psychology, Koç University, Istanbul, Turkey; Koç University Center for Translational Medicine, Istanbul, Turkey
8
Abstract
The ability to judge speed is a fundamental aspect of visual motion processing. Speed judgments are generally assumed to depend on signals in motion-sensitive, directionally selective neurons in areas such as V1 and MT. Speed comparisons might therefore be expected to be most accurate when they use information within a common set of directionally tuned neurons. However, there does not appear to be any published evidence on how well speeds can be compared for movements in different directions. We tested speed discrimination judgments between pairs of random-dot stimuli presented side by side in a series of four experiments (n = 65). Participants judged which of two stimuli appeared faster: a reference stimulus moving along a cardinal or oblique axis, and a comparison stimulus moving either in the same direction or in a different direction. The bias (point of subjective equality) and sensitivity (Weber fraction) were estimated from individual psychometric functions fitted for each condition. There was considerable between-participants variability in psychophysical estimates across conditions. Nonetheless, participants generally made more precise comparisons between stimuli moving in the same direction than between stimuli moving in different directions, at least for conditions with an upward reference (∼20% difference in Weber fractions). We also found evidence for an oblique effect in speed discrimination when comparing stimuli moving in the same direction, and a bias whereby oblique motion tended to be perceived as moving faster than cardinal motion. These results demonstrate interactions between speed and direction processing, thus informing our understanding of how they are represented in the brain.
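The two psychophysical estimates named in this abstract can be made concrete with a small sketch. For a cumulative-Gaussian psychometric function, the PSE is the mean and the Weber fraction relates the spread to the reference speed; the `mu` and `sigma` values below are illustrative, not fitted values from the study.

```python
import math

# Sketch of how PSE and Weber fraction are read off a psychometric
# function, here a cumulative Gaussian over comparison speed.
# mu and sigma are illustrative values, not results from the study.

def p_comparison_faster(speed, mu, sigma):
    """Probability of judging the comparison faster: cumulative Gaussian."""
    return 0.5 * (1.0 + math.erf((speed - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 10.5, 1.4      # deg/s; mu above the reference implies a bias
reference = 10.0           # reference speed (deg/s)

pse = mu                             # speed judged equal to the reference
weber_fraction = sigma / reference   # sensitivity relative to the reference
```

At the PSE the function crosses 0.5 by construction; a larger Weber fraction (flatter function) corresponds to less precise speed comparisons.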
Affiliation(s)
- Catherine Manning
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Oliver Braddick
- Department of Experimental Psychology, University of Oxford, Oxford, UK
9
Rocchi F, Ledgeway T, Webb BS. Criterion-free measurement of motion transparency perception at different speeds. J Vis 2018; 18:5. [PMID: 29614154] [PMCID: PMC5886031] [DOI: 10.1167/18.4.5]
Abstract
Transparency perception often occurs when objects within a visual scene partially occlude each other or move simultaneously, at different velocities, across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that gives rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an "odd-one-out," three-alternative forced-choice procedure. Two intervals contained the standard: a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison: speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception.
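The standard/comparison construction described above can be sketched as a sampling procedure: the standard draws speeds from a uniform range, while the comparison draws from the same range with a central notch excluded. The ranges and notch bounds below are illustrative values, not the study's parameters.

```python
import random

# Sketch of the stimulus construction: the standard samples dot speeds
# from a uniform distribution; the comparison samples from the same range
# with a "notch" of speeds removed (here via rejection sampling).
# Ranges and notch bounds are illustrative assumptions.

def sample_standard(n, lo, hi, rng=random):
    """Dot speeds for the standard interval: uniform over [lo, hi]."""
    return [rng.uniform(lo, hi) for _ in range(n)]

def sample_comparison(n, lo, hi, notch_lo, notch_hi, rng=random):
    """Dot speeds for the comparison interval: same range, notch removed."""
    speeds = []
    while len(speeds) < n:
        s = rng.uniform(lo, hi)
        if not (notch_lo <= s <= notch_hi):
            speeds.append(s)
    return speeds
```

Widening the notch splits the comparison's speed distribution into two increasingly separated clusters, which is the manipulation used to probe when transparency emerges.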
Affiliation(s)
- Francesca Rocchi
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
- Timothy Ledgeway
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
- Ben S Webb
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
10
Abstract
Primates use frequent, rapid eye movements to sample their visual environment. This is a fruitful strategy to make the best use of the highly sensitive foveal part of the retina, but it requires neural mechanisms to bind the rapidly changing visual input into a single, stable percept. Studies investigating these neural mechanisms have typically assumed that perisaccadic perception in nonhuman primates matches that of humans. We tested this assumption by performing identical experiments in human and nonhuman primates. Our data confirm that perisaccadic visual perception of macaques and humans is qualitatively similar. Specifically, we found a reduction in detectability and mislocalization of targets presented at the time of saccades. We also found substantial differences between human and nonhuman primates. Notably, in nonhuman primates, localization that requires knowledge of eye position was less precise, fewer perisaccadic stimuli were detected, and perisaccadic compression was not directed towards the saccade target. The qualitative similarities between species support the view that the nonhuman primate is ideally suited to study aspects of brain function, such as those relying on foveal vision, that are uniquely developed in primates. The quantitative differences, however, demonstrate the need for a reassessment of the models purportedly linking neural response changes at the time of saccades with the behavioral phenomena of perisaccadic reduction of detectability and mislocalization.
Affiliation(s)
- Steffen Klingenhoefer
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
11
Abstract
Sensory neurons gather evidence in favor of the specific stimuli to which they are tuned, but they could improve their sensitivity by also taking counterevidence into account. The Bours-Lankheet model for motion detection uses counterevidence that relies on a specific combination of the ON and OFF channels in the early visual system. Specifically, the model detects pairs of flashes that occur separated in space and time. If the flashes have the same contrast polarity, they are interpreted as evidence in favor of the corresponding motion. But if they have opposite contrasts, they are interpreted as evidence against it. This mechanism provides an explanation for reverse phi (the perceived reversal of an apparent motion stimulus due to periodic contrast inversions) that is a conceptual departure from the standard explanations of the effect. Here, we investigate this counterevidence mechanism by measuring directional tuning curves of neurons in the primary visual (V1) and middle temporal (MT) cortical areas of awake, behaving macaques using constant-contrast and contrast-inverting moving dot stimuli. Our electrophysiological data support the Bours-Lankheet model and suggest that the counterevidence computation occurs at an early stage of neural processing not captured by the standard models.
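The sign logic of the counterevidence scheme can be shown with a toy correlator: multiplying the signed contrasts of two flashes paired across space and time yields positive evidence for same-polarity pairs and negative evidence for opposite-polarity pairs. This is a generic correlator-style illustration of the polarity rule, not the Bours-Lankheet model itself, and the stimulus values are made up.

```python
# Toy illustration of the polarity rule: same-polarity flash pairs count
# as evidence for the displacement direction, opposite-polarity pairs as
# counterevidence, which flips the signalled direction under periodic
# contrast inversion (reverse phi). Stimulus values are illustrative.

def pair_evidence(contrast_a, contrast_b):
    """Correlator-style motion evidence from a space-time flash pair."""
    return contrast_a * contrast_b

same_polarity = pair_evidence(+1.0, +1.0)      # supports the displacement
opposite_polarity = pair_evidence(+1.0, -1.0)  # counts against it

# A contrast-inverting apparent-motion sequence accumulates net negative
# evidence in the displacement direction, i.e., a perceived reversal.
sequence = [+1.0, -1.0, +1.0, -1.0]            # contrast inverts each frame
net = sum(pair_evidence(sequence[t], sequence[t + 1])
          for t in range(len(sequence) - 1))
```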
Affiliation(s)
- Jacob Duijnhouwer
- Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ 07102, USA
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ 07102, USA
12
Chuang J, Ausloos EC, Schwebach CA, Huang X. Integration of motion energy from overlapping random background noise increases perceived speed of coherently moving stimuli. J Neurophysiol 2016; 116:2765-2776. [PMID: 27683893] [DOI: 10.1152/jn.01068.2015]
Abstract
The perception of visual motion can be profoundly influenced by visual context. To gain insight into how the visual system represents motion speed, we investigated how a background stimulus that did not move in a net direction influenced the perceived speed of a center stimulus. Visual stimuli were two overlapping random-dot patterns. The center stimulus moved coherently in a fixed direction, whereas the background stimulus moved randomly. We found that human subjects perceived the speed of the center stimulus to be significantly faster than its veridical speed when the background contained motion noise. Interestingly, the perceived speed was tuned to the noise level of the background. When the speed of the center stimulus was low, the highest perceived speed was reached when the background had a low level of motion noise. As the center speed increased, the peak perceived speed was reached at a progressively higher background noise level. The effect of speed overestimation required the center stimulus to overlap with the background. Increasing the background size within a certain range enhanced the effect, suggesting spatial integration. The speed overestimation was significantly reduced or abolished when the center stimulus and the background stimulus had different colors, or when they were placed at different depths. When the center and background stimuli were perceptually separable, speed overestimation was correlated with perceptual similarity between the center and background stimuli. These results suggest that integration of motion energy from random motion noise has a significant impact on speed perception. Our findings put new constraints on models regarding the neural basis of speed perception.
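The core intuition, that integrating broadband motion energy from random noise inflates a speed estimate, can be sketched with a toy readout. The energy profiles and the energy-weighted-average readout below are illustrative assumptions for demonstration, not the authors' model: the coherent center contributes a peak at its true speed, while the random background contributes energy extending to faster speeds, pulling the estimate upward.

```python
import numpy as np

# Toy sketch: perceived speed read out as an energy-weighted average over
# a speed axis. The coherent center stimulus contributes a peak at its
# true speed; random background noise contributes broadband energy that
# extends to higher speeds. Profiles and readout are illustrative
# assumptions, not the authors' model.

speeds = np.linspace(0.5, 20.0, 200)               # speed axis (deg/s)
center_speed = 4.0
center_energy = np.exp(-0.5 * ((speeds - center_speed) / 1.0) ** 2)
noise_energy = 0.05 * np.ones_like(speeds)         # broadband noise energy

def perceived_speed(energy):
    """Energy-weighted mean of the speed axis."""
    return float((speeds * energy).sum() / energy.sum())
```

With this readout, adding the broadband noise energy shifts the estimate above the veridical center speed, the direction of the overestimation reported in the study.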
Affiliation(s)
- Jason Chuang
- Department of Neuroscience, School of Medicine and Public Health, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, Wisconsin
- Emily C Ausloos
- Department of Neuroscience, School of Medicine and Public Health, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, Wisconsin
- Courtney A Schwebach
- Department of Neuroscience, School of Medicine and Public Health, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, Wisconsin
- Xin Huang
- Department of Neuroscience, School of Medicine and Public Health, McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, Wisconsin
13
Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT. J Neurosci 2015; 35:16180-16198. [PMID: 26658869] [DOI: 10.1523/jneurosci.2175-15.2015]
Abstract
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle, less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons.

SIGNIFICANCE STATEMENT: Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70-100 ms, revealing a dynamic process of image segmentation.
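The contrast between linear averaging and nonlinear pooling can be illustrated with a toy tuning-curve computation. The Gaussian tuning, the expansive exponent, and all values below are illustrative assumptions, not the recorded data or the authors' analysis; they show only how an expansive nonlinearity applied before pooling can turn a unimodal averaged tuning curve into a bimodal one.

```python
import numpy as np

# Toy illustration of why nonlinear pooling can recover two motion
# components that plain response averaging merges: with component
# directions separated by less than twice the tuning width, the average
# of the two tuning curves is unimodal, but applying an expansive
# nonlinearity before pooling restores two peaks. The tuning width and
# exponent are illustrative assumptions.

directions = np.arange(-90, 91)        # probe direction axis (deg)
sigma = 30.0                           # single-stimulus tuning width (deg)

def gauss(center):
    return np.exp(-0.5 * ((directions - center) / sigma) ** 2)

def count_peaks(y):
    """Count strict local maxima of a sampled curve."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

sep = 40.0                             # component separation < 2 * sigma
r1, r2 = gauss(-sep / 2), gauss(sep / 2)

averaged = 0.5 * (r1 + r2)             # linear pooling: one merged peak
expansive = 0.5 * (r1 ** 4 + r2 ** 4)  # expansive pooling: two peaks
```

Raising each component response to a power before summing effectively narrows the tuning, which is one simple way a nonlinear pooling rule can produce the bimodal tuning curves described above.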
|
14
|
Kar K, Krekelberg B. Testing the assumptions underlying fMRI adaptation using intracortical recordings in area MT. Cortex 2016; 80:21-34. [PMID: 26856637 DOI: 10.1016/j.cortex.2015.12.011] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Revised: 11/10/2015] [Accepted: 12/14/2015] [Indexed: 11/17/2022]
Abstract
We investigated how neural activity in the middle temporal area of the macaque monkey changes after 3 sec of exposure to a visual stimulus and used this to gain insight into the assumptions underlying the fMRI adaptation method (fMRIa). We studied both changes in tuning curves following weak and strong motion stimuli (adaptation) and the differences between a first and second exposure to the same stimulus (repetition suppression). Typically, tuning curves had smaller amplitudes and narrower tuning widths after strong adaptation; this was true for single neurons, multi-unit activity (MUA), the evoked local field potential (LFP), as well as gamma band activity. Repetition typically led to reduced responses. This reduction was correlated with direction selectivity and not explained by neural fatigue. Our data, however, warn against a simplistic view of the consequences of adaptation. First, a considerable fraction of neurons and sites showed response enhancements after adaptation, especially when probed with a stimulus that moved opposite to the direction of the adapting stimulus. Second, adaptation was stimulus selective only on a time scale of ∼100 msec. Third, aggregate measures of neural activity (MUA, LFPs) had substantially different adaptation effects. Fourth, there were qualitative differences between our findings in MT and earlier findings in IT cortex. We conclude that selective adaptation effects in fMRIa are relatively easy to miss even when they exist (for instance, by presenting stimuli for too long, or because neurons that enhance after adaptation cancel out the effect of neurons that suppress). Moreover, we argue that adaptation should be understood in the context of the computations that a neural circuit performs. Using fMRIa as a tool to uncover neural selectivity requires a better understanding of this circuitry and its consequences for adaptation.
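The typical adaptation effect reported above (smaller amplitude and narrower width of the tuning curve after strong adaptation) can be sketched as a change in Gaussian tuning parameters; the numbers here are illustrative only, not the recorded values:

```python
import numpy as np

def tuning_curve(directions, pref, amp, width):
    """Gaussian direction tuning curve (arbitrary units)."""
    d = (directions - pref + 180.0) % 360.0 - 180.0
    return amp * np.exp(-0.5 * (d / width) ** 2)

directions = np.arange(0.0, 360.0, 10.0)

pre = tuning_curve(directions, pref=0.0, amp=1.0, width=50.0)
# After strong adaptation: reduced amplitude and narrower tuning width
# (illustrative parameter changes standing in for the reported effect).
post = tuning_curve(directions, pref=0.0, amp=0.7, width=38.0)

# Narrower width: the normalized response 90 deg from the peak drops.
print(post.max() < pre.max())   # True: amplitude is reduced
```

Note that the abstract also describes neurons whose responses are enhanced after adaptation, so a single amplitude-and-width rescaling like this is only the typical case, not a universal one.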
Affiliation(s)
- Kohitij Kar: Center for Molecular and Behavioral Neuroscience, Rutgers University - Newark, USA; Behavioral and Neural Sciences Graduate Program, Rutgers University - Newark, Newark, USA
- Bart Krekelberg: Center for Molecular and Behavioral Neuroscience, Rutgers University - Newark, USA
|
15
|
Joukes J, Hartmann TS, Krekelberg B. Motion detection based on recurrent network dynamics. Front Syst Neurosci 2014; 8:239. [PMID: 25565992 PMCID: PMC4274907 DOI: 10.3389/fnsys.2014.00239] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Accepted: 12/01/2014] [Indexed: 11/18/2022] Open
Abstract
The detection of visual motion requires temporal delays to compare current with earlier visual input. Models of motion detection assume that these delays reside in separate classes of slow and fast thalamic cells, or slow and fast synaptic transmission. We used a data-driven modeling approach to generate a model that instead uses recurrent network dynamics with a single, fixed temporal integration window to implement the velocity computation. This model successfully reproduced the temporal response dynamics of a population of motion sensitive neurons in macaque middle temporal area (MT) and its constituent parts matched many of the properties found in the motion processing pathway (e.g., Gabor-like receptive fields (RFs), simple and complex cells, spatially asymmetric excitation and inhibition). Reverse correlation analysis revealed that a simplified network based on first and second order space-time correlations of the recurrent model behaved much like a feedforward motion energy (ME) model. The feedforward model, however, failed to capture the full speed tuning and direction selectivity properties based on higher than second order space-time correlations typically found in MT. These findings support the idea that recurrent network connectivity can create temporal delays to compute velocity. Moreover, the model explains why the motion detection system often behaves like a feedforward ME network, even though the anatomical evidence strongly suggests that this network should be dominated by recurrent feedback.
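The feedforward motion-energy comparison that the recurrent model approximates (correlating delayed input at one location with current input at a neighboring location) can be illustrated with a minimal Reichardt-style correlator. This is a toy stand-in for the model class discussed above, not the paper's fitted network:

```python
import numpy as np

def reichardt_output(stimulus, delay=1):
    """
    Minimal opponent Reichardt correlator over a time x space array.
    Positive output signals rightward motion, negative leftward.
    """
    s = np.asarray(stimulus, dtype=float)
    # Delayed signal at each position correlated with the current signal
    # at the right-hand neighbour, minus the mirror-image pairing.
    right = s[:-delay, :-1] * s[delay:, 1:]   # delayed left x current right
    left = s[:-delay, 1:] * s[delay:, :-1]    # delayed right x current left
    return float(np.sum(right - left))

# A bright bar stepping rightward one pixel per frame.
t, x = 8, 12
movie = np.zeros((t, x))
for frame in range(t):
    movie[frame, frame + 2] = 1.0

print(reichardt_output(movie))        # positive: rightward motion detected
print(reichardt_output(movie[::-1]))  # negative: time-reversed movie reads as leftward
```

The `delay` parameter here plays the role of the temporal delay the abstract discusses; the paper's point is that such delays can arise from recurrent network dynamics rather than from dedicated slow channels.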
Affiliation(s)
- Jeroen Joukes: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
- Till S Hartmann: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
- Bart Krekelberg: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, USA
|
16
|
Perry CJ, Fallah M. Feature integration and object representations along the dorsal stream visual hierarchy. Front Comput Neurosci 2014; 8:84. [PMID: 25140147 PMCID: PMC4122209 DOI: 10.3389/fncom.2014.00084] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2014] [Accepted: 07/16/2014] [Indexed: 11/13/2022] Open
Abstract
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes this information hierarchically, with each stage building on the previous one. In the ventral stream, this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence that dorsal and ventral stream information is integrated in motion computation, giving rise to intermediate object representations that facilitate object selection and decision-making mechanisms in the dorsal stream. First, we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then, we discuss recent work on the integration of ventral and dorsal stream features that leads to intermediate object representations in the dorsal stream. Finally, we propose a framework describing how, and at what stage, different features are integrated into dorsal visual stream object representations. Determining how features are integrated along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations rather than on local features.
Affiliation(s)
- Carolyn Jeane Perry: Visual Perception and Attention Laboratory, School of Kinesiology and Health Science, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Mazyar Fallah: Visual Perception and Attention Laboratory, School of Kinesiology and Health Science, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Departments of Biology and Psychology, York University, Toronto, ON, Canada; Canadian Action and Perception Network, York University, Toronto, ON, Canada
|
17
|
Xiao J, Niu YQ, Wiesner S, Huang X. Normalization of neuronal responses in cortical area MT across signal strengths and motion directions. J Neurophysiol 2014; 112:1291-306. [PMID: 24899674 DOI: 10.1152/jn.00700.2013] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Multiple visual stimuli are common in natural scenes, yet it remains unclear how multiple stimuli interact to influence neuronal responses. We investigated this question by manipulating relative signal strengths of two stimuli moving simultaneously within the receptive fields (RFs) of neurons in the extrastriate middle temporal (MT) cortex. Visual stimuli were overlapping random-dot patterns moving in two directions separated by 90°. We first varied the motion coherence of each random-dot pattern and characterized, across the direction tuning curve, the relationship between neuronal responses elicited by bidirectional stimuli and by the constituent motion components. The tuning curve for bidirectional stimuli showed response normalization and could be accounted for by a weighted sum of the responses to the motion components. Allowing nonlinear, multiplicative interaction between the two component responses significantly improved the data fit for some neurons, and the interaction mainly had a suppressive effect on the neuronal response. The weighting of the component responses was not fixed but depended on relative signal strengths. When two stimulus components moved at different coherence levels, the response weight for the higher-coherence component was significantly greater than that for the lower-coherence component. We also varied relative luminance levels of two coherently moving stimuli and found that the MT response weight for the higher-luminance component was also greater. These results suggest that competition between multiple stimuli within a neuron's RF depends on relative signal strengths of the stimuli and that multiplicative nonlinearity may play an important role in shaping the response tuning for multiple stimuli.
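The model class described in this abstract (a weighted sum of the component responses, optionally with a multiplicative interaction term) can be written out directly. The tuning widths, weights, and interaction coefficient below are illustrative stand-ins, not the values fitted to the recorded neurons:

```python
import numpy as np

def component_tuning(directions, pref, width=40.0):
    """Gaussian tuning to a single motion component (arbitrary units)."""
    d = (directions - pref + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / width) ** 2)

def bidirectional_response(r1, r2, w1, w2, c=0.0):
    """Weighted sum of the component responses plus an optional
    multiplicative interaction term (c < 0 gives suppression)."""
    return w1 * r1 + w2 * r2 + c * r1 * r2

directions = np.arange(0.0, 360.0, 10.0)
r_high = component_tuning(directions, 90.0)    # higher-coherence component
r_low = component_tuning(directions, 180.0)    # lower-coherence component

# The higher-coherence component carries the larger weight, and a negative
# interaction term captures the mostly suppressive nonlinearity.
r_both = bidirectional_response(r_high, r_low, w1=0.7, w2=0.3, c=-0.2)

print(directions[np.argmax(r_both)])   # 90.0: tuning dominated by the stronger component
```

In the paper the weights are free parameters fit per neuron and per coherence condition; the sketch just shows how unequal weights shift the bidirectional tuning toward the higher-signal-strength component.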
Affiliation(s)
- Jianbo Xiao: Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin
- Yu-Qiong Niu: Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin
- Steven Wiesner: Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin
- Xin Huang: Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin
|