1. Rosenberg A, Thompson LW, Doudlah R, Chang TY. Neuronal Representations Supporting Three-Dimensional Vision in Nonhuman Primates. Annu Rev Vis Sci 2023; 9:337-359. [PMID: 36944312] [DOI: 10.1146/annurev-vision-111022-123857]
Abstract
The visual system must reconstruct the dynamic, three-dimensional (3D) world from ambiguous two-dimensional (2D) retinal images. In this review, we synthesize current literature on how the visual system of nonhuman primates performs this transformation through multiple channels within the classically defined dorsal (where) and ventral (what) pathways. Each of these channels is specialized for processing different 3D features (e.g., the shape, orientation, or motion of objects, or the larger scene structure). Despite the common goal of 3D reconstruction, neurocomputational differences between the channels impose distinct information-limiting constraints on perception. Convergent evidence further points to the little-studied area V3A as a potential branchpoint from which multiple 3D-fugal processing channels diverge. We speculate that the expansion of V3A in humans may have supported the emergence of advanced 3D spatial reasoning skills. Lastly, we discuss future directions for exploring 3D information transmission across brain areas and experimental approaches that can further advance the understanding of 3D vision.
Affiliation(s)
- Ari Rosenberg
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA;
- Lowell W Thompson
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA;
- Raymond Doudlah
- Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA;
- Ting-Yu Chang
- School of Medicine, National Defense Medical Center, Taipei, Taiwan
2. Wen P, Landy MS, Rokers B. Identifying cortical areas that underlie the transformation from 2D retinal to 3D head-centric motion signals. Neuroimage 2023; 270:119909. [PMID: 36801370] [PMCID: PMC10061442] [DOI: 10.1016/j.neuroimage.2023.119909]
Abstract
Accurate motion perception requires that the visual system integrate the 2D retinal motion signals received by the two eyes into a single representation of 3D motion. However, most experimental paradigms present the same stimulus to the two eyes, signaling motion limited to a 2D fronto-parallel plane. Such paradigms are unable to dissociate the representation of 3D head-centric motion signals (i.e., 3D object motion relative to the observer) from the associated 2D retinal motion signals. Here, we used stereoscopic displays to present separate motion signals to the two eyes and examined their representation in visual cortex using fMRI. Specifically, we presented random-dot motion stimuli that specified various 3D head-centric motion directions. We also presented control stimuli, which matched the motion energy of the retinal signals, but were inconsistent with any 3D motion direction. We decoded motion direction from BOLD activity using a probabilistic decoding algorithm. We found that 3D motion direction signals can be reliably decoded in three major clusters in the human visual system. Critically, in early visual cortex (V1-V3), we found no significant difference in decoding performance between stimuli specifying 3D motion directions and the control stimuli, suggesting that these areas represent the 2D retinal motion signals, rather than 3D head-centric motion itself. In voxels in and surrounding hMT and IPS0, however, decoding performance was consistently superior for stimuli that specified 3D motion directions compared to control stimuli. Our results reveal the parts of the visual processing hierarchy that are critical for the transformation of retinal into 3D head-centric motion signals and suggest a role for IPS0 in their representation, in addition to its sensitivity to 3D object structure and static depth.
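The decoding logic described above (training a decoder on voxel patterns and comparing accuracy for 3D-motion stimuli against motion-energy-matched controls) can be illustrated with a minimal sketch. The study itself used a probabilistic decoding algorithm applied to BOLD responses; the simulated voxel data, direction labels, and logistic-regression classifier below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: cross-validated decoding of motion direction from simulated voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_trials_per_dir = 50, 40
directions = ["toward", "away", "leftward", "rightward"]  # hypothetical direction labels

X, y = [], []
for direction in directions:
    mean_pattern = 0.5 * rng.normal(size=n_voxels)                           # direction-specific response pattern
    X.append(mean_pattern + rng.normal(size=(n_trials_per_dir, n_voxels)))   # add trial-by-trial noise
    y.extend([direction] * n_trials_per_dir)
X = np.vstack(X)

# Cross-validated decoding accuracy, to be compared against chance (1 / number of directions).
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / len(directions):.2f})")
```

The paper's key comparison is between decoding accuracy for stimuli that specify a 3D direction and accuracy for the control stimuli; areas with a reliable advantage for the former are candidates for representing head-centric rather than purely retinal motion.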
Affiliation(s)
- Puti Wen
- Psychology, New York University Abu Dhabi, United Arab Emirates.
- Michael S Landy
- Department of Psychology and Center for Neural Science, New York University, United States
- Bas Rokers
- Psychology, New York University Abu Dhabi, United Arab Emirates; Department of Psychology and Center for Neural Science, New York University, United States
3. Himmelberg MM, Segala FG, Maloney RT, Harris JM, Wade AR. Decoding Neural Responses to Motion-in-Depth Using EEG. Front Neurosci 2020; 14:581706. [PMID: 33362456] [PMCID: PMC7758252] [DOI: 10.3389/fnins.2020.581706]
Abstract
Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single "motion-in-depth" pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured from across the full scalp, generated in response to CD- and IOVD-isolating stimuli moving toward or away in depth can be distinguished. We find that both MID cue type and 3D-motion direction can be decoded at different points in the EEG timecourse and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse based on a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance is dependent upon fundamentally different low-level stimulus features, but that later stages of decoding performance may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues that move toward and away in depth can be decoded from EEG signals, and that different aspects of MID-cues contribute to decoding performance at different points along the EEG timecourse.
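For readers unfamiliar with the two cues, the sketch below shows how CD and IOVD would be computed from the horizontal image positions of a single target in the two eyes: CD is the temporal derivative of the binocular disparity, IOVD the difference of the two monocular velocities. For one coherent target the two quantities are algebraically identical; CD- and IOVD-isolating stimuli work by disrupting one source of information (e.g., with temporally uncorrelated dots) while preserving the other. The trajectory values here are arbitrary.

```python
# Sketch of the two stereoscopic motion-in-depth cues for a single target (illustrative values).
import numpy as np

t = np.linspace(0.0, 1.0, 101)       # time (s)
x_left = 0.2 * t                     # left-eye image position (deg): drifts rightward
x_right = -0.2 * t                   # right-eye image position (deg): drifts leftward

dt = t[1] - t[0]
disparity = x_left - x_right                                  # binocular disparity over time
cd = np.gradient(disparity, dt)                               # changing disparity: d(disparity)/dt
iovd = np.gradient(x_left, dt) - np.gradient(x_right, dt)     # interocular velocity difference

print(cd.mean(), iovd.mean())        # identical (~0.4 deg/s) for a single coherent target
```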
Affiliation(s)
- Marc M Himmelberg
- Department of Psychology, University of York, York, United Kingdom; Department of Psychology, New York University, New York, NY, United States
- Ryan T Maloney
- Department of Psychology, University of York, York, United Kingdom
- Julie M Harris
- School of Psychology and Neuroscience, University of St. Andrews, Fife, United Kingdom
- Alex R Wade
- Department of Psychology, University of York, York, United Kingdom; York Biomedical Research Institute, University of York, York, United Kingdom
4. Cue-dependent effects of VR experience on motion-in-depth sensitivity. PLoS One 2020; 15:e0229929. [PMID: 32150569] [PMCID: PMC7062262] [DOI: 10.1371/journal.pone.0229929]
Abstract
The visual system exploits multiple signals, including monocular and binocular cues, to determine the motion of objects through depth. In the laboratory, sensitivity to different three-dimensional (3D) motion cues varies across observers and is often weak for binocular cues. However, laboratory assessments may reflect factors beyond inherent perceptual sensitivity. For example, the appearance of weak binocular sensitivity may relate to extensive prior experience with two-dimensional (2D) displays in which binocular cues are not informative. Here we evaluated the impact of experience on motion-in-depth (MID) sensitivity in a virtual reality (VR) environment. We tested a large cohort of observers who reported having no prior VR experience and found that binocular cue sensitivity was substantially weaker than monocular cue sensitivity. As expected, sensitivity was greater when monocular and binocular cues were presented together than in isolation. Surprisingly, the addition of motion parallax signals appeared to cause observers to rely almost exclusively on monocular cues. As observers gained experience in the VR task, sensitivity to monocular and binocular cues increased. Notably, most observers were unable to distinguish the direction of MID based on binocular cues above chance level when tested early in the experiment, whereas most showed statistically significant sensitivity to binocular cues when tested late in the experiment. This result suggests that observers may discount binocular cues when they are first encountered in a VR environment. Laboratory assessments may thus underestimate the sensitivity of inexperienced observers to MID, especially for binocular cues.
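As a side note on how sensitivity in a direction-discrimination task of this kind is commonly summarized, the sketch below computes d' from toward/away judgments; the response counts are made up, and this is not necessarily the specific measure used in the study.

```python
# Sketch: d' for an approach-vs-recede discrimination block (hypothetical counts).
from scipy.stats import norm

hits, misses = 62, 38                      # "approaching" responses on approaching trials
false_alarms, correct_rejections = 45, 55  # "approaching" responses on receding trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")               # d' near 0 corresponds to chance-level discrimination
```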
5. Joo SJ, Greer DA, Cormack LK, Huk AC. Eye-specific pattern-motion signals support the perception of three-dimensional motion. J Vis 2019; 19:27. [PMID: 31013523] [PMCID: PMC6482860] [DOI: 10.1167/19.4.27]
Abstract
An object moving through three-dimensional (3D) space typically yields different patterns of velocities in each eye. For an interocular velocity difference cue to be used, some instances of real 3D motion in the environment (e.g., when a moving object is partially occluded) would require an interocular velocity difference computation that operates on motion signals that are not only monocular (or eye specific) but also depend on each eye's two-dimensional (2D) direction being estimated over regions larger than the size of V1 receptive fields (i.e., global pattern motion). We investigated this possibility using 3D motion aftereffects (MAEs) with stimuli comprising many small, drifting Gabor elements. Conventional frontoparallel (2D) MAEs were local—highly sensitive to the test elements being in the same locations as the adaptor (Experiment 1). In contrast, 3D MAEs were robust to the test elements being in different retinal locations than the adaptor, indicating that 3D motion processing involves relatively global spatial pooling of motion signals (Experiment 2). The 3D MAEs were strong even when the local elements were in unmatched locations across the two eyes during adaptation, as well as when the adapting stimulus elements were randomly oriented, and specified global motion via the intersection of constraints (Experiment 3). These results bolster the notion of eye-specific computation of 2D pattern motion (involving global pooling of local, eye-specific motion signals) for the purpose of computing 3D motion, and highlight the idea that classically “late” computations such as pattern motion can be done in a manner that retains information about the eye of origin.
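The intersection-of-constraints computation invoked in Experiment 3 can be made concrete with a small example: each drifting Gabor, seen through its aperture, measures only the component of the global 2D velocity along its drift axis, and the global velocity is the single vector consistent with all of those 1D measurements (solved here by least squares, which recovers the exact intersection when the constraints are noiseless). The velocities and element count below are arbitrary.

```python
# Sketch: recovering global pattern motion from 1D element constraints (intersection of constraints).
import numpy as np

rng = np.random.default_rng(1)
v_true = np.array([1.5, -0.8])            # hypothetical global pattern velocity (deg/s)

thetas = rng.uniform(0, np.pi, 20)        # drift axis of each element (normal to its carrier orientation)
axes = np.column_stack([np.cos(thetas), np.sin(thetas)])
drift_speeds = axes @ v_true              # 1D speed each element reports through its aperture

v_est, *_ = np.linalg.lstsq(axes, drift_speeds, rcond=None)
print(v_est)                              # recovers approximately [1.5, -0.8]
```

Computing such a pattern-motion estimate separately for each eye and then differencing the two is the eye-specific route to 3D motion that the aftereffect results above support.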
Affiliation(s)
- Sung Jun Joo
- Department of Psychology, Pusan National University, Busan, Republic of Korea
- Devon A Greer
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA; Department of Psychology, The University of Texas at Austin, Austin, TX, USA
- Alexander C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA; Department of Psychology, The University of Texas at Austin, Austin, TX, USA; Department of Neuroscience, The University of Texas at Austin, Austin, TX, USA
6. Nityananda V, Joubier C, Tan J, Tarawneh G, Read JCA. Motion-in-depth perception and prey capture in the praying mantis Sphodromantis lineola. J Exp Biol 2019; 222:jeb.198614. [DOI: 10.1242/jeb.198614]
Abstract
Perceiving motion-in-depth is essential to detecting approaching or receding objects, predators and prey. This can be achieved using several cues, including binocular stereoscopic cues such as changing disparity and interocular velocity differences, and monocular cues such as looming. While these have been studied in detail in humans, only looming responses have been well characterized in insects and we know nothing about the role of stereo cues and how they might interact with looming cues. We used our 3D insect cinema in a series of experiments to investigate the role of the stereo cues mentioned above, as well as looming, in the perception of motion-in-depth during predatory strikes by the praying mantis Sphodromantis lineola. Our results show that motion-in-depth does increase the probability of mantis strikes but only for the classic looming stimulus, an expanding luminance edge. Approach indicated by radial motion of a texture or expansion of a motion-defined edge, or by stereoscopic cues, all failed to elicit increased striking. We conclude that mantises use stereopsis to detect depth but not motion-in-depth, which is detected via looming.
Affiliation(s)
- Vivek Nityananda
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle Upon Tyne, NE2 4HH, UK
- Coline Joubier
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle Upon Tyne, NE2 4HH, UK
- M2 Comportement Animal et Humain, École doctorale de Rennes, Vie Agro Santé, University of Rennes 1, Rennes 35000, France
- Jerry Tan
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle Upon Tyne, NE2 4HH, UK
- Ghaith Tarawneh
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle Upon Tyne, NE2 4HH, UK
- Jenny C. A. Read
- Institute of Neuroscience, Henry Wellcome Building for Neuroecology, Newcastle University, Framlington Place, Newcastle Upon Tyne, NE2 4HH, UK
7. Motion Discrimination and the Motion Aftereffect in Mouse Vision. eNeuro 2018; 5:eN-NWR-0065-18. [PMID: 30627645] [PMCID: PMC6325549] [DOI: 10.1523/eneuro.0065-18.2018]
Abstract
Prolonged exposure to motion in one direction often leads to the illusion of motion in the opposite direction for stationary objects. This motion aftereffect likely arises across several visual areas from adaptive changes in the balance of activity and competitive interactions. We examined whether or not the mouse was susceptible to this same illusion to determine whether it would be a suitable model for learning about the neural representation of the motion aftereffect. Under a classical conditioning paradigm, mice learned to lick when presented with motion in one direction and not the opposite direction. When the mice were adapted to motion preceding this test, their lick behavior for zero coherence motion was biased for motion in the opposite direction of the adapting stimulus. Overall, lick count versus motion coherence shifted in the opposite direction of the adapting stimulus. This suggests that although the mouse has a simpler visual system compared with primates, it still is subject to the motion aftereffect and may elucidate the underlying circuitry.
8. Zhang D, Nourrit V, De Bougrenet de la Tocnaye JL. Enhancing Motion-In-Depth Perception of Random-Dot Stereograms. Perception 2018; 47:722-734. [PMID: 29914316] [DOI: 10.1177/0301006618775026]
Abstract
Random-dot stereograms have been widely used to explore the neural mechanisms underlying binocular vision. Although they are a powerful tool to stimulate motion-in-depth (MID) perception, published results report some difficulties in the capacity to perceive MID generated by random-dot stereograms. The purpose of this study was to investigate whether the performance of MID perception could be improved using an appropriate stimulus design. Sixteen inexperienced observers participated in the experiment. A training session was carried out to improve the accuracy of MID detection before the experiment. Four aspects of stimulus design were investigated: presence of a static reference, background texture, relative disparity, and stimulus contrast. Participants' performance in MID direction discrimination was recorded and compared to evaluate whether varying these factors helped MID perception. Results showed that only the presence of background texture had a significant effect on MID direction perception. This study provides suggestions for the design of 3D stimuli in order to facilitate MID perception.
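To make the stimulus construction concrete, the sketch below generates left- and right-eye dot positions for a dynamic random-dot stereogram in which a central patch carries a disparity that ramps over frames, the standard way of specifying motion-in-depth from changing disparity. Dot counts, patch size, and the disparity ramp are arbitrary, not the values used in the study.

```python
# Sketch: dot positions for an RDS whose central patch moves in depth via changing disparity.
import numpy as np

rng = np.random.default_rng(2)
n_dots, n_frames = 500, 60
x = rng.uniform(-5.0, 5.0, n_dots)                 # dot positions (deg)
y = rng.uniform(-5.0, 5.0, n_dots)
in_target = (np.abs(x) < 2.0) & (np.abs(y) < 2.0)  # central patch carries the disparity

frames = []
for f in range(n_frames):
    disparity = 0.005 * f                          # deg; ramps up over frames
    shift = np.where(in_target, disparity / 2.0, 0.0)
    x_left, x_right = x + shift, x - shift         # crossed disparity under this sign convention
    frames.append((np.column_stack([x_left, y]), np.column_stack([x_right, y])))

# frames[f] holds the (left-eye, right-eye) dot arrays for frame f.
print(len(frames), frames[-1][0].shape)
```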
Affiliation(s)
- Di Zhang
- School of Science, Faculty of Science and Technology, Communication University of China, Beijing, China; Optics Department, IMT Atlantique, Brest, France
9.
Abstract
The visual system must recover important properties of the external environment if its host is to survive. Because the retinae are effectively two-dimensional but the world is three-dimensional (3D), the patterns of stimulation both within and across the eyes must be used to infer the distal stimulus-the environment-in all three dimensions. Moreover, animals and elements in the environment move, which means the input contains rich temporal information. Here, in addition to reviewing the literature, we discuss how and why prior work has focused on purported isolated systems (e.g., stereopsis) or cues (e.g., horizontal disparity) that do not necessarily map elegantly on to the computations and complex patterns of stimulation that arise when visual systems operate within the real world. We thus also introduce the binoptic flow field (BFF) as a description of the 3D motion information available in realistic environments, which can foster the use of ecologically valid yet well-controlled stimuli. Further, it can help clarify how future studies can more directly focus on the computations and stimulus properties the visual system might use to support perception and behavior in a dynamic 3D world.
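A minimal geometric sketch of the quantity the BFF describes: take a 3D point moving relative to the head, project it into each eye separately, and differentiate to obtain the pair of retinal velocities that together constitute the binocular motion signal. The pinhole-eye projection, interocular distance, and trajectory below are simplifying assumptions for illustration, not the authors' formulation.

```python
# Sketch: the pair of (horizontal) retinal velocities produced by a point moving in depth.
import numpy as np

IOD = 0.064                                   # interocular distance (m), assumed
eye_left = np.array([-IOD / 2, 0.0, 0.0])
eye_right = np.array([+IOD / 2, 0.0, 0.0])

def azimuth(point, eye):
    """Horizontal visual direction (rad) of a 3D point from one eye (simple pinhole model)."""
    rel = point - eye
    return np.arctan2(rel[0], rel[2])

p0 = np.array([0.0, 0.0, 1.0])                # point 1 m straight ahead
vel = np.array([0.0, 0.0, -0.5])              # approaching the head at 0.5 m/s
dt = 0.01
p1 = p0 + vel * dt

v_left = (azimuth(p1, eye_left) - azimuth(p0, eye_left)) / dt
v_right = (azimuth(p1, eye_right) - azimuth(p0, eye_right)) / dt
print(v_left, v_right)    # equal and opposite retinal velocities: the signature of head-on approach
```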
Affiliation(s)
- Jonas Knöll
- The University of Texas at Austin, Texas 78757;
10. Bonnen K, Huk AC, Cormack LK. Dynamic mechanisms of visually guided 3D motion tracking. J Neurophysiol 2017; 118:1515-1531. [PMID: 28637820] [PMCID: PMC5596126] [DOI: 10.1152/jn.00831.2016]
Abstract
The continuous perception of motion-through-depth is critical for both navigation and interacting with objects in a dynamic three-dimensional (3D) world. Here we used 3D tracking to simultaneously assess the perception of motion in all directions, facilitating comparisons of responses to motion-through-depth to frontoparallel motion. Observers manually tracked a stereoscopic target as it moved in a 3D Brownian random walk. We found that continuous tracking of motion-through-depth was selectively impaired, showing different spatiotemporal properties compared with frontoparallel motion tracking. Two separate factors were found to contribute to this selective impairment. The first is the geometric constraint that motion-through-depth yields much smaller retinal projections than frontoparallel motion, given the same object speed in the 3D environment. The second factor is the sluggish nature of disparity processing, which is present even for frontoparallel motion tracking of a disparity-defined stimulus. Thus, despite the ecological importance of reacting to approaching objects, both the geometry of 3D vision and the nature of disparity processing result in considerable impairments for tracking motion-through-depth using binocular cues.
NEW & NOTEWORTHY We characterize motion perception continuously in all directions using an ecologically relevant, manual target tracking paradigm we recently developed. This approach reveals a selective impairment to the perception of motion-through-depth. Geometric considerations demonstrate that this impairment is not consistent with previously observed spatial deficits (e.g., stereomotion suppression). However, results from an examination of disparity processing are consistent with the longer latencies observed in discrete, trial-based measurements of the perception of motion-through-depth.
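The geometric constraint identified as the first factor can be quantified in a few lines: for the same object speed, motion-through-depth changes the binocular disparity at roughly I/D times the angular speed produced by frontoparallel motion, where I is the interocular distance and D the viewing distance (small-angle approximation; the values below are illustrative).

```python
# Sketch: retinal consequences of equal object speeds, frontoparallel vs. through depth.
import numpy as np

I = 0.064      # interocular distance (m), assumed
D = 1.0        # viewing distance (m)
speed = 0.1    # object speed (m/s), same in both cases

frontoparallel = np.degrees(speed / D)          # angular speed for frontoparallel motion (deg/s)
# Binocular subtense (vergence angle) ~ I / D, so its rate of change for motion in depth
# is ~ I * speed / D**2.
through_depth = np.degrees(I * speed / D**2)    # disparity change rate (deg/s)

print(frontoparallel, through_depth, through_depth / frontoparallel)   # ratio ~ I / D ~ 0.064
```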
Affiliation(s)
- Kathryn Bonnen
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas;
- Institute for Neuroscience, University of Texas at Austin, Austin, Texas; and
- Department of Neuroscience, University of Texas at Austin, Austin, Texas
- Alexander C Huk
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas
- Department of Psychology, University of Texas at Austin, Austin, Texas
- Institute for Neuroscience, University of Texas at Austin, Austin, Texas; and
- Department of Neuroscience, University of Texas at Austin, Austin, Texas
- Lawrence K Cormack
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas
- Department of Psychology, University of Texas at Austin, Austin, Texas
- Institute for Neuroscience, University of Texas at Austin, Austin, Texas; and
11. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals. J Neurosci 2016; 36:10791-10802. [PMID: 27798134] [DOI: 10.1523/jneurosci.1298-16.2016]
Abstract
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how-or indeed if-these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms.
12.
Abstract
We use visual information to determine our dynamic relationship with other objects in a three-dimensional (3D) world. Despite decades of work on visual motion processing, it remains unclear how 3D directions-trajectories that include motion toward or away from the observer-are represented and processed in visual cortex. Area MT is heavily implicated in processing visual motion and depth, yet previous work has found little evidence for 3D direction sensitivity per se. Here we use a rich ensemble of binocular motion stimuli to reveal that most neurons in area MT of the anesthetized macaque encode 3D motion information. This tuning for 3D motion arises from multiple mechanisms, including different motion preferences in the two eyes and a nonlinear interaction of these signals when both eyes are stimulated. Using a novel method for functional binocular alignment, we were able to rule out contributions of static disparity tuning to the 3D motion tuning we observed. We propose that a primary function of MT is to encode 3D motion, critical for judging the movement of objects in dynamic real-world environments.
13.
Abstract
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion.
14. Peng Q, Shi BE. Neural population models for perception of motion in depth. Vision Res 2014; 101:11-31. [DOI: 10.1016/j.visres.2014.04.014]
15.
Abstract
This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.
Affiliation(s)
- Alexander C Huk
- Center for Perceptual Systems, Neurobiology, Psychology, The University of Texas at Austin, TX 78712, United States.
16. Czuba TB, Rokers B, Huk AC, Cormack LK. To CD or not to CD: Is there a 3D motion aftereffect based on changing disparities? J Vis 2012; 12:7. [PMID: 22508954] [DOI: 10.1167/12.4.7]
Abstract
Recently, T. B. Czuba, B. Rokers, K. Guillet, A. C. Huk, and L. K. Cormack (2011) and Y. Sakano, R. S. Allison, and I. P. Howard (2012) published very similar studies using the motion aftereffect to probe the way in which motion through depth is computed. Here, we compare and contrast the findings of these two studies and incorporate their results with a brief follow-up experiment. Taken together, the results leave no doubt that the human visual system incorporates a mechanism that is uniquely sensitive to the difference in velocity signals between the two eyes, but--perhaps surprisingly--evidence for a neural representation of changes in binocular disparity over time remains elusive.
Affiliation(s)
- Thaddeus B Czuba
- Center for Perceptual Systems, Department of Psychology, The University of Texas at Austin, USA.