1. Maruya A, Zaidi Q. Anisotropy of object nonrigidity: High-level perceptual consequences of cortical anisotropy. bioRxiv 2024:2024.09.10.612333. PMID: 39345500; PMCID: PMC11429613; DOI: 10.1101/2024.09.10.612333.
Abstract
We present a surprising anisotropy in perceived object nonrigidity, a complex, higher-level perceptual phenomenon, and explain it, unexpectedly, by the distribution of low-level neural properties in primary visual cortex. We examined the visual interpretation of two rigidly connected rotating circular rings. At speeds where observers predominantly perceived the horizontally rotating rings as rotating rigidly, they perceived only nonrigid wobbling when the image was rotated by 90°. Additionally, vertically rotating rings appeared narrower and longer than their physically identical horizontally rotating counterparts. We show that these perceived shape changes can be decoded from V1 outputs by incorporating documented anisotropies in orientation selectivity. We then show that even when the shapes are matched, the increased nonrigidity persists in vertical rotations, suggesting that motion mechanisms also play a role. By incorporating cortical anisotropies into optic flow computations, we show that the kinematic gradients (divergence, curl, deformation) for vertical rotations align more with physical nonrigidity, while those for horizontal rotations align closer to rigidity, indicating that cortical anisotropies contribute to the orientation dependence of perceived nonrigidity. Our results reveal how high-level percepts are shaped by low-level anisotropies, raising questions about their evolutionary significance, particularly regarding shape constancy and motion perception.
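The kinematic gradients named in this abstract are standard differential invariants of a 2-D velocity field and are straightforward to compute numerically. A minimal sketch, not from the paper (the synthetic flow field and grid are illustrative):

```python
import numpy as np

# Synthetic optic-flow field on a grid (illustrative): rigid rotation about the origin.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
omega = 1.0                      # angular speed
u, v = -omega * y, omega * x     # flow components (vx, vy)

dx = x[0, 1] - x[0, 0]           # grid spacing (same in both axes here)
du_dy, du_dx = np.gradient(u, dx)
dv_dy, dv_dx = np.gradient(v, dx)

divergence = du_dx + dv_dy               # expansion / contraction
curl = dv_dx - du_dy                     # local rotation
deformation = np.hypot(du_dx - dv_dy,    # shear magnitude
                       du_dy + dv_dx)

# For a rigid rotation, divergence and deformation vanish and curl = 2*omega.
print(divergence.mean(), curl.mean(), deformation.mean())
```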
2. Vaziri PA, McDougle SD, Clark DA. Humans use local spectrotemporal correlations to detect rising and falling pitch. bioRxiv 2024:2024.08.03.606481. PMID: 39131316; PMCID: PMC11312537; DOI: 10.1101/2024.08.03.606481.
Abstract
To discern speech or appreciate music, the human auditory system detects how pitch increases or decreases over time. However, the algorithms used to detect changes in pitch, or pitch motion, are incompletely understood. Here, using psychophysics, computational modeling, functional neuroimaging, and analysis of recorded speech, we ask if humans detect pitch motion using computations analogous to those used by the visual system. We adapted stimuli from studies of vision to create novel auditory correlated noise stimuli that elicited robust pitch motion percepts. Crucially, these stimuli possess no persistent features across frequency or time, but do possess positive or negative local spectrotemporal correlations in intensity. In psychophysical experiments, we found clear evidence that humans judge pitch direction based on both positive and negative spectrotemporal correlations. The observed sensitivity to negative correlations is a direct analogue of illusory "reverse-phi" motion in vision, and thus constitutes a new auditory illusion. Our behavioral results and computational modeling led us to hypothesize that human auditory processing employs pitch direction opponency. fMRI measurements in auditory cortex supported this hypothesis. To link our psychophysical findings to real-world pitch perception, we analyzed recordings of English and Mandarin speech and discovered that pitch direction was robustly signaled by the same positive and negative spectrotemporal correlations used in our psychophysical tests, suggesting that sensitivity to both positive and negative correlations confers ecological benefits. Overall, this work reveals that motion detection algorithms sensitive to local correlations are deployed by the central nervous system across disparate modalities (vision and audition) and dimensions (space and frequency).
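A rough sense of what "local spectrotemporal correlations in intensity" means can be had from a toy correlator applied to a spectrogram-like array; this is an illustrative sketch, not the authors' stimuli or model (the ridge construction and offsets are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def diagonal_correlation(spec, df=1, dt=1):
    """Mean product of intensity fluctuations at (f, t) and (f+df, t+dt).
    Positive values indicate upward-drifting energy in the spectrogram."""
    z = spec - spec.mean()
    return np.mean(z[:-df, :-dt] * z[df:, dt:])

# Toy spectrogram (freq x time): noise plus an upward-drifting ridge.
n_f, n_t = 64, 200
spec = rng.normal(0, 0.1, (n_f, n_t))
for t in range(n_t):
    spec[(t // 4) % n_f, t] += 1.0   # ridge rises one bin every 4 frames

print(diagonal_correlation(spec, df=1, dt=4))        # > 0: "rising pitch"
print(diagonal_correlation(spec[::-1], df=1, dt=4))  # ridge falls: ~0 or < 0
```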
Affiliation(s)
- Samuel D. McDougle
- Dept of Psychology, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
- Damon A. Clark
- Wu Tsai Institute, Yale University, New Haven, CT 06511
- Dept of Molecular Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Dept of Physics, Yale University, New Haven, CT 06511
- Dept of Neuroscience, Yale University, New Haven, CT 06511
- Quantitative Biology Institute, Yale University, New Haven, CT 06511
3. Zhang T, Ying H, Wang H, Zhao F, Pan Q, Zhan Q, Zhang F, An Q, Liu T, Hu Y, Zhang Y. Visual motion sensitivity as an indicator of diabetic retinopathy in type 2 diabetes mellitus. Front Neurosci 2024; 18:1412241. PMID: 39156633; PMCID: PMC11327050; DOI: 10.3389/fnins.2024.1412241.
Abstract
Objectives This study used a set of visual motion sensitivity tests to investigate the correlation between visual motion sensitivity and diabetic retinopathy (DR) in type 2 diabetes mellitus (T2DM), thereby furnishing a scientific rationale for preventing and controlling DR. Methods The research combined questionnaire collection with an on-site investigation involving 542 patients with T2DM recruited from a community. The visual motion sensitivity tests measured participants' visual motion perception across three spatial frequencies (low, medium, and high) for both first- and second-order contrast. A logistic regression model was used to investigate the relationship between visual motion sensitivity and DR prevalence. In addition, Pearson correlation analysis was used to examine factors influencing visual motion sensitivity, and restricted cubic spline (RCS) functions were used to assess the dose-response relationship between visual motion sensitivity and glycated hemoglobin. Results Among the 542 subjects, there were 162 cases of DR, a prevalence of 29.89%. After adjusting for age, gender, glycated hemoglobin, duration of diabetes, BMI, and hypertension, we found that declines in first- and second-order high-spatial-frequency sensitivity increased the risk of DR [odds ratio (OR): 1.519 (1.065, 2.168) and 1.249 (1.068, 1.460), respectively]. Declines in second-order low-, medium-, and high-spatial-frequency sensitivity were risk factors for moderate to severe DR [OR: 1.556 (1.116, 2.168), 1.388 (1.066, 1.806), and 1.476 (1.139, 1.912)]. First- and second-order high-spatial-frequency sensitivity were significantly positively correlated with glycated hemoglobin (r = 0.105, p = 0.015 and r = 0.119, p = 0.005, respectively). Conclusion Visual motion sensitivity, especially for second-order high-spatial-frequency stimuli, emerges as a significant predictor of DR in T2DM, offering a sensitive diagnostic tool for early detection.
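For orientation, odds ratios of this kind come from a logistic model of DR status on a sensitivity measure with covariate adjustment. A minimal sketch with simulated data (variable names, coefficients, and the data are illustrative, not the study's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 542
sensitivity_decline = rng.normal(0, 1, n)           # standardized score
age = rng.normal(60, 8, n)
logit = -1.2 + 0.4 * sensitivity_decline + 0.01 * (age - 60)
dr = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # DR status (0/1)

X = sm.add_constant(np.column_stack([sensitivity_decline, age]))
fit = sm.Logit(dr, X).fit(disp=0)

or_est = np.exp(fit.params[1])                      # odds ratio per 1-SD decline
or_ci = np.exp(fit.conf_int()[1])                   # 95% confidence interval
print(f"OR = {or_est:.3f} ({or_ci[0]:.3f}, {or_ci[1]:.3f})")
```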
Affiliation(s)
- Tianlin Zhang
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Haojiang Ying
- Department of Psychology, Soochow University, Suzhou, China
- Huiqun Wang
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Fouxi Zhao
- Guizhou Center for Disease Control and Prevention, Guiyang, China
- Qiying Pan
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Qingqing Zhan
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Fuyan Zhang
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Qinyu An
- Medical College, Guizhou University, Guiyang, China
- Tao Liu
- School of Public Health, The Key Laboratory of Environmental Pollution Monitoring and Disease Control, Ministry of Education, Guizhou Medical University, Guiyang, China
- Guizhou Center for Disease Control and Prevention, Guiyang, China
- Yuandong Hu
- Guizhou Center for Disease Control and Prevention, Guiyang, China
- Yang Zhang
- Department of Psychology, Soochow University, Suzhou, China
4. Llamas-Cornejo I, Peterzell DH, Serrano-Pedraza I. Temporal mechanisms in frontoparallel stereomotion revealed by individual differences analysis. Eur J Neurosci 2024; 59:3117-3133. PMID: 38622053; DOI: 10.1111/ejn.16342.
Abstract
Masking experiments, using vertical and horizontal sinusoidal depth corrugations, have suggested the existence of more than two spatial-frequency disparity mechanisms, a result confirmed through an individual differences approach. Here, using factor analytic techniques, we investigated the existence of independent temporal mechanisms in frontoparallel stereoscopic (cyclopean) motion. To construct stereomotion, we used sinusoidal depth corrugations obtained with dynamic random-dot stereograms; thus, no luminance motion was present monocularly. We measured disparity thresholds for drifting vertical (up-down) and horizontal (left-right) sinusoidal corrugations of 0.4 cyc/deg at 0.25, 0.5, 1, 2, 4, 6, and 8 Hz. In total, we tested 34 participants. Results showed a small orientation anisotropy, with lower thresholds for horizontal corrugations. Disparity thresholds as a function of temporal frequency were almost constant from 0.25 up to 1 Hz and then increased monotonically. Principal component analysis uncovered two significant factors for vertical and two for horizontal corrugations. Varimax rotation showed that one factor loaded from 0.25 to 1-2 Hz and a second factor from 2-4 to 8 Hz. Direct Oblimin rotation indicated a moderate intercorrelation of both factors. Our results suggest the possible existence of two somewhat interdependent temporal mechanisms involved in frontoparallel stereomotion.
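A compact sketch of the factor-analytic pipeline described above (z-score the participant-by-frequency threshold matrix, extract two principal components, varimax-rotate), run on simulated data with two built-in temporal mechanisms; all numbers are illustrative:

```python
import numpy as np

def varimax(L, n_iter=100):
    """Varimax rotation of a loading matrix L (variables x factors)."""
    p, k = L.shape
    R = np.eye(k)
    for _ in range(n_iter):
        LR = L @ R
        U, _, Vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(0)) / p))
        R = U @ Vt
    return L @ R

rng = np.random.default_rng(2)
freqs = [0.25, 0.5, 1, 2, 4, 6, 8]                     # Hz, as in the study
slow = rng.normal(0, 1, (34, 1))                        # per-observer slow mechanism
fast = rng.normal(0, 1, (34, 1))                        # per-observer fast mechanism
w_slow = np.array([1, 1, .9, .4, .1, 0, 0])             # loads on low frequencies
X = slow * w_slow + fast * (1 - w_slow) + rng.normal(0, .2, (34, 7))

X = (X - X.mean(0)) / X.std(0)                          # z-score thresholds
_, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt[:2].T * s[:2] / np.sqrt(len(X))           # loadings of 2 PCs
print(np.round(varimax(loadings), 2))                   # one slow, one fast factor
```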
Affiliation(s)
- Ichasus Llamas-Cornejo
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Campus de Somosaguas, Madrid, Spain
- David H Peterzell
- Fielding Graduate University, Santa Barbara, California, and National University (JFK), Pleasant Hill, California, USA
- Ignacio Serrano-Pedraza
- Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Campus de Somosaguas, Madrid, Spain
5. Maruya A, Zaidi Q. Perceptual transitions between object rigidity and non-rigidity: Competition and cooperation among motion energy, feature tracking, and shape-based priors. J Vis 2024; 24(2):3. PMID: 38306112; PMCID: PMC10848565; DOI: 10.1167/jov.24.2.3.
Abstract
Why do moving objects appear rigid when projected retinal images are deformed non-rigidly? We used rotating rigid objects that can appear rigid or non-rigid to test whether shape features contribute to rigidity perception. When two circular rings were rigidly linked at an angle and jointly rotated at moderate speeds, observers reported that the rings wobbled and were not linked rigidly, but rigid rotation was reported at slow speeds. When gaps, paint, or vertices were added, the rings appeared rigidly rotating even at moderate speeds. At high speeds, all configurations appeared non-rigid. Salient features thus contribute to rigidity at slow and moderate speeds but not at high speeds. Simulated responses of arrays of motion-energy cells showed that motion flow vectors are predominantly orthogonal to the contours of the rings, not parallel to the rotation direction. A convolutional neural network trained to distinguish flow patterns for wobbling versus rotation gave a high probability of wobbling for the motion-energy flows. However, the convolutional neural network gave high probabilities of rotation for motion flows generated by tracking features with arrays of MT pattern-motion cells and corner detectors. In addition, circular rings can appear to spin and roll despite the absence of any sensory evidence, and this illusion is prevented by vertices, gaps, and painted segments, showing the effects of rotational symmetry and shape. Combining convolutional neural network outputs that give greater weight to motion energy at fast speeds and to feature tracking at slow speeds, with the shape-based priors for wobbling and rolling, explained rigid and non-rigid percepts across shapes and speeds (R2 = 0.95). The results demonstrate how cooperation and competition between different neuronal classes lead to specific states of visual perception and to transitions between the states.
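The combination rule in the final step can be summarized in a few lines. A schematic sketch, assuming a simple speed-dependent weighting and a single shape-based prior (the functional form and values are placeholders, not the paper's fitted model):

```python
def p_rotation(speed_dps, p_rot_feature, p_rot_energy,
               prior_rotation=0.6, speed_half=10.0):
    """Speed-weighted mix of feature-tracking and motion-energy evidence,
    combined with a shape-based prior for rotation vs. wobbling."""
    w_energy = speed_dps / (speed_dps + speed_half)   # fast -> trust motion energy
    p = (1 - w_energy) * p_rot_feature + w_energy * p_rot_energy
    post = prior_rotation * p / (prior_rotation * p +
                                 (1 - prior_rotation) * (1 - p))
    return post

# Feature tracking says "rotation", motion energy says "wobbling":
for speed in (1, 10, 50):                              # deg/s
    print(speed, round(p_rotation(speed, 0.9, 0.1), 2))
```

As the speed grows, the posterior shifts from rigid rotation toward wobbling, mirroring the percept transitions the abstract describes.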
Affiliation(s)
- Akihito Maruya
- Graduate Center for Vision Research, State University of New York, New York, NY, USA
- Qasim Zaidi
- Graduate Center for Vision Research, State University of New York, New York, NY, USA
6. Liesefeld HR, Lamy D, Gaspelin N, Geng JJ, Kerzel D, Schall JD, Allen HA, Anderson BA, Boettcher S, Busch NA, Carlisle NB, Colonius H, Draschkow D, Egeth H, Leber AB, Müller HJ, Röer JP, Schubö A, Slagter HA, Theeuwes J, Wolfe J. Terms of debate: Consensus definitions to guide the scientific discourse on visual distraction. Atten Percept Psychophys 2024. PMID: 38177944; DOI: 10.3758/s13414-023-02820-3.
Abstract
Hypothesis-driven research rests on clearly articulated scientific theories. The building blocks for communicating these theories are scientific terms. Obviously, communication, and thus scientific progress, is hampered if the meaning of these terms varies idiosyncratically across (sub)fields and even across individual researchers within the same subfield. We have formed an international group of experts representing various theoretical stances with the goal of homogenizing the use of the terms that are most relevant to fundamental research on visual distraction in visual search. Our discussions revealed striking heterogeneity, and we had to invest much time and effort to increase our mutual understanding of each other's use of central terms, which turned out to be strongly related to our respective theoretical positions. We present the outcomes of these discussions in a glossary and provide some context in several essays. Specifically, we explicate how central terms are used in the distraction literature and consensually sharpen their definitions in order to enable communication across theoretical standpoints. Where applicable, we also explain how the respective constructs can be measured. We believe that this novel type of adversarial collaboration can serve as a model for other fields of psychological research that strive to build a solid groundwork for theorizing and communicating by establishing a common language. For the field of visual distraction, the present paper should facilitate communication across theoretical standpoints and may serve as an introduction and reference text for newcomers.
Affiliation(s)
- Heinrich R Liesefeld
- Department of Psychology, University of Bremen, Hochschulring 18, D-28359, Bremen, Germany.
- Dominique Lamy
- The School of Psychology Sciences and The Sagol School of Neuroscience, Tel Aviv University, Ramat Aviv 69978, POB 39040, Tel Aviv, Israel.
- Joy J Geng
- University of California Davis, Davis, CA, USA
- Hans Colonius
- Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Anna Schubö
- Philipps University Marburg, Marburg, Germany
- Jeremy Wolfe
- Harvard Medical School, Boston, MA, USA
- Brigham & Women's Hospital, Boston, MA, USA
7. Yang YH, Fukiage T, Sun Z, Nishida S. Psychophysical measurement of perceived motion flow of naturalistic scenes. iScience 2023; 26:108307. PMID: 38025782; PMCID: PMC10679809; DOI: 10.1016/j.isci.2023.108307.
Abstract
The neural and computational mechanisms underlying visual motion perception have been extensively investigated over several decades, but little attempt has been made to measure and analyze how human observers perceive the map of motion vectors, or optical flow, in complex naturalistic scenes. Here, we developed a psychophysical method to assess human-perceived motion flows using local vector matching and a flash probe. The estimated perceived flow for naturalistic movies agreed with the physically correct flow (ground truth) at many points, but also showed consistent deviations from the ground truth (flow illusions) at other points. Comparisons with the predictions of various computational models, including cutting-edge computer vision algorithms and coordinate transformation models, indicated that some flow illusions are attributable to lower-level factors such as spatiotemporal pooling and signal loss, while others reflect higher-level computations, including vector decomposition. Our study demonstrates a promising data-driven psychophysical paradigm for an advanced understanding of visual motion perception.
Affiliation(s)
- Yung-Hao Yang
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Taiki Fukiage
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1, Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan
- Zitang Sun
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Shin’ya Nishida
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1, Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan
8. Maruya A, Zaidi Q. Perceptual transitions between object rigidity and non-rigidity: Competition and cooperation between motion-energy, feature-tracking and shape-based priors. bioRxiv 2023:2023.04.07.536067. PMID: 37503257; PMCID: PMC10369874; DOI: 10.1101/2023.04.07.536067.
Abstract
Why do moving objects appear rigid when projected retinal images are deformed non-rigidly? We used rotating rigid objects that can appear rigid or non-rigid to test whether shape features contribute to rigidity perception. When two circular rings were rigidly linked at an angle and jointly rotated at moderate speeds, observers reported that the rings wobbled and were not linked rigidly, but rigid rotation was reported at slow speeds. When gaps, paint, or vertices were added, the rings appeared rigidly rotating even at moderate speeds. At high speeds, all configurations appeared non-rigid. Salient features thus contribute to rigidity at slow and moderate speeds, but not at high speeds. Simulated responses of arrays of motion-energy cells showed that motion flow vectors are predominantly orthogonal to the contours of the rings, not parallel to the rotation direction. A convolutional neural network (CNN) trained to distinguish flow patterns for wobbling versus rotation gave a high probability of wobbling for the motion-energy flows. However, the CNN gave high probabilities of rotation for motion flows generated by tracking features with arrays of MT pattern-motion cells and corner detectors. In addition, circular rings can appear to spin and roll despite the absence of any sensory evidence, and this illusion is prevented by vertices, gaps, and painted segments, showing the effects of rotational symmetry and shape. Combining CNN outputs that give greater weight to motion energy at fast speeds and to feature tracking at slow speeds, with the shape-based priors for wobbling and rolling, explained rigid and non-rigid percepts across shapes and speeds (R2 = 0.95). The results demonstrate how cooperation and competition between different neuronal classes lead to specific states of visual perception and to transitions between the states.
Affiliation(s)
- Akihito Maruya
- Graduate Center for Vision Research, State University of New York, 33 West 42nd St, New York, NY 10036
- Qasim Zaidi
- Graduate Center for Vision Research, State University of New York, 33 West 42nd St, New York, NY 10036
9. de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970; PMCID: PMC10444871; DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired due to using second-order motion stimuli, saccades directed towards moving targets land at positions where targets were ~100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the time one sees the moving target before making the saccade is increased, saccades become accurate. In line with this, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
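The ~100 ms result has a compact arithmetic reading: if a movement is programmed from a position signal that is 100 ms old and no velocity compensation is available, the landing error is simply speed times lag. A toy sketch under that assumption (speeds and the lag parameterization are illustrative):

```python
def landing_error_deg(target_speed_dps, position_lag_s=0.100, velocity_gain=0.0):
    """Endpoint error of a movement programmed from a stale position signal.
    velocity_gain = 1 means full velocity extrapolation (accurate hand
    movements); 0 means no compensation (saccades to second-order targets)."""
    uncompensated = target_speed_dps * position_lag_s
    return (1.0 - velocity_gain) * uncompensated

for speed in (5.0, 10.0, 20.0):                          # deg/s
    print(speed,
          landing_error_deg(speed, velocity_gain=0.0),   # saccade-like error
          landing_error_deg(speed, velocity_gain=1.0))   # hand-like (no error)
```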
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain.
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany.
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany.
10. Nakada H, Murakami I. Local motion signals silence the perceptual solution of global apparent motion. J Vis 2023; 23(6):12. PMID: 37378990; DOI: 10.1167/jov.23.6.12.
Abstract
Stimuli for apparent motion can have ambiguity in frame-to-frame correspondences among visual elements. This occurs when visual inputs cause a correspondence problem that allows multiple alternatives of perceptual solutions. Herein we examined the influence of local visual motions on a perceptual solution under such a multistable situation. We repeatedly alternated two frames of stimuli in a circular configuration in which discrete elements in two different colors alternated in space and switched their colors frame by frame. These stimuli were compatible with three perceptual solutions: globally consistent clockwise and counterclockwise rotations and color flickers at the same locations without such global apparent motion. We added a sinusoidal grating continuously drifting within each element to examine whether the perceptual solution for the global apparent motion was affected by the local continuous motions. We found that the local motions suppressed global apparent motion and promoted another perceptual solution that the local elements were only flickering between the two colors and drifting within static windows. It was concluded that local continuous motions as counterevidence against global apparent motion contributed to individuating visual objects and integrating visual features for maintaining object identity at the same location.
Affiliation(s)
- Hoko Nakada
- Department of Psychology, The University of Tokyo, Tokyo, Japan
- Ikuya Murakami
- Department of Psychology, The University of Tokyo, Tokyo, Japan
11. Bigelow A, Kim T, Namima T, Bair W, Pasupathy A. Dissociation in neuronal encoding of object versus surface motion in the primate brain. Curr Biol 2023; 33:711-719.e5. PMID: 36738735; PMCID: PMC9992021; DOI: 10.1016/j.cub.2023.01.016.
Abstract
A paradox exists in our understanding of motion processing in the primate visual system: neurons in the dorsal motion processing stream often strikingly fail to encode long-range and perceptually salient jumps of a moving stimulus. Psychophysical studies suggest that such long-range motion, which requires integration over more distant parts of the visual field, may be based on higher-order motion processing mechanisms that rely on feature or object tracking. Here, we demonstrate that ventral visual area V4, long recognized as critical for processing static scenes, includes neurons that maintain direction selectivity for long-range motion, even when conflicting local motion is present. These V4 neurons exhibit specific selectivity for the motion of objects, i.e., targets with defined boundaries, rather than the motion of surfaces behind apertures, and are selective for direction of motion over a broad range of spatial displacements and defined by a variety of features. Motion direction at a range of speeds can be accurately decoded on single trials from the activity of just a few V4 neurons. Thus, our results identify a novel motion computation in the ventral stream that is strikingly different from, and complementary to, the well-established system in the dorsal stream, and they support the hypothesis that the ventral stream system interacts with the dorsal stream to achieve the higher level of abstraction critical for tracking dynamic objects.
Affiliation(s)
- Anthony Bigelow
- Graduate Program in Neuroscience, University of Washington, Seattle, WA 98195, USA; Department of Biological Structure and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Taekjun Kim
- Department of Biological Structure and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Tomoyuki Namima
- Department of Biological Structure and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Wyeth Bair
- Department of Biological Structure and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA
- Anitha Pasupathy
- Department of Biological Structure and Washington National Primate Research Center, University of Washington, Seattle, WA 98195, USA.
12. Rauchman SH, Zubair A, Jacob B, Rauchman D, Pinkhasov A, Placantonakis DG, Reiss AB. Traumatic brain injury: Mechanisms, manifestations, and visual sequelae. Front Neurosci 2023; 17:1090672. PMID: 36908792; PMCID: PMC9995859; DOI: 10.3389/fnins.2023.1090672.
Abstract
Traumatic brain injury (TBI) results when external physical forces impact the head with sufficient intensity to cause damage to the brain. TBI can be mild, moderate, or severe and may have long-term consequences including visual difficulties, cognitive deficits, headache, pain, sleep disturbances, and post-traumatic epilepsy. Disruption of the normal functioning of the brain leads to a cascade of effects with molecular and anatomical changes, persistent neuronal hyperexcitation, neuroinflammation, and neuronal loss. Destructive processes that occur at the cellular and molecular level lead to inflammation, oxidative stress, calcium dysregulation, and apoptosis. Vascular damage, ischemia, and loss of blood-brain barrier integrity contribute to destruction of brain tissue. This review focuses on the cellular damage incited during TBI and the frequently life-altering lasting effects of this destruction on vision, cognition, balance, and sleep. The wide range of visual complaints associated with TBI are addressed and repair processes where there is potential for intervention and neuronal preservation are highlighted.
Affiliation(s)
- Aarij Zubair
- NYU Long Island School of Medicine, Mineola, NY, United States
- Benna Jacob
- NYU Long Island School of Medicine, Mineola, NY, United States
- Danielle Rauchman
- Department of Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, United States
- Aaron Pinkhasov
- NYU Long Island School of Medicine, Mineola, NY, United States
- Allison B Reiss
- NYU Long Island School of Medicine, Mineola, NY, United States
13. He D, Öğmen H. A neural model for vector decomposition and relative-motion perception. Vision Res 2023; 202:108142. PMID: 36423519; DOI: 10.1016/j.visres.2022.108142.
Abstract
The perception of motion not only depends on the detection of motion signals but also on choosing and applying reference-frames according to which motion is interpreted. Here we propose a neural model that implements the common-fate principle for reference-frame selection. The model starts with a retinotopic layer of directionally-tuned motion detectors. The Gestalt common-fate principle is applied to the activities of these detectors to implement, in two neural populations, the direction and the magnitude (speed) of the reference-frame. The output activities of retinotopic motion-detectors are decomposed using the direction of the reference-frame. The direction and magnitude of the reference-frame are then applied to these decomposed motion-vectors to generate activities that reflect relative-motion perception, i.e., the perception of motion with respect to the prevailing reference-frame. We simulated this model for classical relative motion stimuli, viz., the three-dot, rotating-wheel, and point-walker (biological motion) paradigms, and found the model performance to be close to theoretical vector decomposition values. In the three-dot paradigm, the model predicted perceived curved trajectories for the target dot when its horizontal velocity was slower or faster than that of the flanking dots. We tested this prediction in two psychophysical experiments and found good qualitative and quantitative agreement between the model and the data. Our results show that a simple neural network using solely motion information can account for the perception of group and relative motion.
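The decomposition at the model's core is easy to state outside the network: take the common-fate reference frame to be the shared group velocity (here approximated by the mean), and relative motion is what remains after subtracting it. A minimal numeric sketch (velocities are illustrative):

```python
import numpy as np

# Velocities (vx, vy) of three dots: two flankers translating rightward,
# a target that also moves vertically (illustrative values).
v = np.array([[4.0, 0.0],      # flanker
              [4.0, 0.0],      # flanker
              [4.0, 2.0]])     # target

v_ref = v.mean(axis=0)         # common-fate reference frame (group motion)
v_rel = v - v_ref              # motion relative to the group

print("reference frame:", v_ref)   # the group appears to translate with this
print("relative motion:", v_rel)   # the target's residual motion is mostly vertical
```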
Affiliation(s)
- Dongcheng He
- Laboratory of Perceptual and Cognitive Dynamics, University of Denver, Denver, CO, USA; Department of Electrical & Computer Engineering, University of Denver, Denver, CO, USA; Ritchie School of Engineering & Computer Science, University of Denver, Denver, CO, USA
- Haluk Öğmen
- Laboratory of Perceptual and Cognitive Dynamics, University of Denver, Denver, CO, USA; Department of Electrical & Computer Engineering, University of Denver, Denver, CO, USA; Ritchie School of Engineering & Computer Science, University of Denver, Denver, CO, USA.
14. Waz S, Liu Z. Evidence for strictly monocular processing in visual motion opponency and Glass pattern perception. Vision Res 2021; 186:103-111. PMID: 34082396; DOI: 10.1016/j.visres.2021.04.008.
Abstract
When presented with locally paired dots moving in opposite directions, motion selective neurons in the middle temporal cortex (MT) reduce firing while neurons in V1 are unaffected. This physiological effect is known as motion opponency. The current study used psychophysics to investigate the neural circuit underlying motion opponency. We asked whether opposing motion signals could arrive from different eyes into the receptive field of a binocular neuron while still maintaining motion opponency. We took advantage of prior findings that orientation discrimination of the motion axis (along which paired dots oscillate) is harder when dots move counter-phase than in-phase, an effect associated with motion opponency. We found that such an effect disappeared when paired dots originated from different eyes. This suggests that motion opponency, at some point, involves strictly monocular processing. This does not mean that motion opponency is entirely monocular. Further, we found that the effect of a Glass pattern disappeared under similar viewing conditions, suggesting that Glass pattern perception also involves some strictly monocular processing.
Affiliation(s)
- Sebastian Waz
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA 92697, USA; Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA.
- Zili Liu
- Department of Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
15. Mishra S, Maganti N, Squires N, Bomdica P, Nigam D, Shapiro A, Gill MK, Lyon AT, Mirza RG. Contrast sensitivity testing in retinal vein occlusion using a novel stimulus. Transl Vis Sci Technol 2020; 9(11):29. PMID: 33173608; PMCID: PMC7594580; DOI: 10.1167/tvst.9.11.29.
Abstract
Purpose This study evaluated a novel tool known as the motion diamond stimulus (MDS), which utilizes contrast-generated illusory motion in dynamic test regions to determine contrast sensitivity (CS). Methods Patients with treated unilateral retinal vein occlusions (RVOs) underwent three assessments: the MDS, the Pelli-Robson (PR), and the National Eye Institute's Visual Function Questionnaire (VFQ-25). The MDS assessment produced two data end points, α and β. The α value represents the overall contrast threshold level, and the β value quantifies the adaptability of the visual contrast system. The CS parameters from the MDS and log CS PR output values were used to compare RVO eyes (n = 20) to control eyes (n = 20). Results Participants had a mean composite VFQ-25 score of 89.5 ± 10.4. A significant difference was observed between the RVO eyes and the control eyes in PR log CS scores (P = 0.0001) and in MDS α value (P = 0.01). No difference in MDS β value was found between the study groups (P = 0.39). Conclusions The results for the MDS assessment's α parameter corroborated the PR scores, suggesting contrast sensitivity threshold impairment in patients with RVO. No significant difference in β value was observed, suggesting that adaptability of the visual system is maintained in treated RVO eyes. Translational Relevance Visual complaints cannot be fully captured by Snellen visual acuity alone. The MDS potentially offers a more complete view of visual function by including contrast sensitivity, and may quantify changes otherwise overlooked in retinal disease progression.
Affiliation(s)
- Shubhendu Mishra
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Nenita Maganti
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Natalie Squires
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Prithvi Bomdica
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Manjot K. Gill
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Alice T. Lyon
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
- Rukhsana G. Mirza
- Department of Ophthalmology, Northwestern University, Chicago, IL, USA
16. Neural responses to apparent motion can be predicted by responses to non-moving stimuli. Neuroimage 2020; 218:116973. PMID: 32464291; PMCID: PMC7422841; DOI: 10.1016/j.neuroimage.2020.116973.
Abstract
When two objects are presented in alternation at two locations, they are seen as a single object moving from one location to the other. This apparent motion (AM) percept is experienced for objects located at short and also at long distances. However, current models cannot explain how the brain integrates information over large distances to create such long-range AM. This study investigates the neural markers of AM by parcelling out the contribution of spatial and temporal interactions not specific to motion. In two experiments, participants’ EEG was recorded while they viewed two stimuli inducing AM. Different combinations of these stimuli were also shown in a static context to predict an AM neural response where no motion is perceived. We compared the goodness of fit between these different predictions and found consistent results in both experiments. At short-range, the addition of the inhibitory spatial and temporal interactions not specific to motion improved the AM prediction. However, there was no indication that spatial or temporal non-linear interactions were present at long-range. This suggests that short- and long-range AM rely on different neural mechanisms. Importantly, our results also show that at both short- and long-range, responses generated by a moving stimulus could be well predicted from conditions in which no motion is perceived. That is, the EEG response to a moving stimulus is simply a combination of individual responses to non-moving stimuli. This demonstrates a dissociation between the brain response and the subjective percept of motion.
Highlights
- EEG responses are inhibited by spatial and temporal stimulus interactions.
- These interactions are important for motion at short but not at long distances.
- We find no trace of a specific neural signature of motion perception.
- Neural responses to motion are well predicted by responses to non-moving stimuli.
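The linear-prediction logic is compact: sum the single-flash responses, then ask what single multiplicative gain best maps that prediction onto the measured response (g < 1 indicates subadditivity). A schematic sketch with simulated waveforms standing in for EEG:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 0.5, 256)
r1 = np.exp(-((t - 0.12) / 0.03) ** 2)          # response to flash at location 1
r2 = np.exp(-((t - 0.20) / 0.03) ** 2)          # response to flash at location 2

linear_prediction = r1 + r2                      # no spatiotemporal interaction
measured = 0.5 * linear_prediction + rng.normal(0, 0.02, t.size)  # subadditive

# Best-fitting multiplicative gain (least squares): g = <m, p> / <p, p>
g = measured @ linear_prediction / (linear_prediction @ linear_prediction)
print(f"gain = {g:.2f}  (g < 1 implies a subadditive interaction)")
```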
17. Yildizoglu T, Riegler C, Fitzgerald JE, Portugues R. A neural representation of naturalistic motion-guided behavior in the zebrafish brain. Curr Biol 2020; 30:2321-2333.e6. PMID: 32386533; DOI: 10.1016/j.cub.2020.04.043.
Abstract
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Affiliation(s)
- Tugce Yildizoglu
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany
- Clemens Riegler
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Neurobiology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
- Ruben Portugues
- Max Planck Institute of Neurobiology, Research Group of Sensorimotor Control, Martinsried 82152, Germany; Institute of Neuroscience, Technical University of Munich, Munich 80802, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich 80802, Germany.
| |
Collapse
|
18
|
Goettker A, Braun DI, Gegenfurtner KR. Dynamic combination of position and motion information when tracking moving targets. J Vis 2020; 19:2. [PMID: 31287856 DOI: 10.1167/19.7.2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
To accurately foveate a moving target, the oculomotor system needs to estimate the position of the target at the saccade end, based on information about its position and ongoing movement, while accounting for neuronal delays and execution time of the saccade. We investigated human interceptive saccades and pursuit responses to moving targets defined by high and low luminance contrast or by chromatic contrast only (isoluminance). We used step-ramps with perpendicular directions between vertical target steps of 10 deg/s and horizontal ramps of 2.5 to 20 deg/s to separate errors with respect to the position step of the target in the vertical dimension, and errors related to target motion in the horizontal dimension. Interceptive saccades to targets of high and low luminance contrast landed close to the actual target positions, suggesting relatively accurate estimates of the amount of target displacement. Interceptive saccades to isoluminant targets were less accurate. They landed at positions the target had on average 100 ms before saccade onset. One account of this finding is that the integration of target motion is compromised for isoluminant targets moving in the periphery. In this case, the oculomotor system can use an accurate, but delayed position component, but cannot account for target movement. This deficit was also present for the postsaccadic pursuit speed. For the two luminance conditions, pursuit direction and speed were adjusted depending on the saccadic landing position. The rapid postsaccadic pursuit adjustments suggest shared position- and motion-related signals of target and eye for saccade and pursuit control.
19. Kanaya HK. Examination of lower level motion mechanisms that provide information to object tracking: An examination using dichoptic stimulation. i-Perception 2019; 10:2041669519891745. PMID: 31832128; PMCID: PMC6891108; DOI: 10.1177/2041669519891745.
Abstract
In this study, we examined the operation of first- and second-order motion mechanisms with respect to object tracking using dichoptic presentation. A bistable apparent motion stimulus composed of four rectangles arranged in square- and diamond-shapes in every other frame was presented binocularly, monocularly, or dichoptically using a stereoscope. Since past motion studies showed that the first-order motion mechanism cannot function under dichoptic stimulation, we evaluated the upper temporal frequency limits of object tracking with dichoptic presentation and compared these results with those obtained with ordinary binocular or monocular (nondichoptic) presentation. We found that the temporal limits were 4-5 Hz, regardless of the viewing conditions. These limits are similar to those for within-attribute (first- and second-order) object tracking (4-5 Hz) obtained in our previous study. Thus, this putative mechanism may be responsible for object tracking, based only on second-order components, even in the case of first-order stimuli.
20. Lu VT, Wright CE, Chubb C, Sperling G. Variation in target and distractor heterogeneity impacts performance in the centroid task. J Vis 2019; 19(4):21. PMID: 30998831; DOI: 10.1167/19.4.21.
Abstract
In a selective centroid task, the participant views a brief cloud of items of different types (some of which are targets, the others distractors) and strives to mouse-click the centroid of the target items, ignoring the distractors. Advantages of the centroid task are that multiple target types can appear in the same display and that influence functions, which estimate the weight of each stimulus type in the cloud on the perceived centroid for each participant, can be obtained easily and efficiently. Here we document the strong, negative impact on performance that results when the participant is instructed to attend to target dots that consist of two or more levels of a single feature dimension, even when those levels differ categorically from those of the distractor dots. The results also show a smaller, but still observable, decrement in performance when there is heterogeneity in the distractor dots.
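Influence functions of this kind are often estimated by regressing the clicked location on the per-type centroids across trials. A minimal sketch with simulated trials (the generative weights and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_items = 200, 12
true_w = np.array([0.8, 0.2])          # weight on targets vs. distractors

clicks, type_centroids = [], []
for _ in range(n_trials):
    pos = rng.uniform(-1, 1, (n_items, 2))
    is_target = np.arange(n_items) < n_items // 2
    c_t = pos[is_target].mean(0)       # centroid of targets
    c_d = pos[~is_target].mean(0)      # centroid of distractors
    click = true_w[0] * c_t + true_w[1] * c_d + rng.normal(0, 0.05, 2)
    clicks.append(click)
    type_centroids.append([c_t, c_d])

C = np.array(type_centroids)           # trials x 2 types x 2 coords
y = np.array(clicks)                   # trials x 2 coords
# Solve for per-type weights jointly over the x and y coordinates.
A = C.transpose(0, 2, 1).reshape(-1, 2)
w_hat, *_ = np.linalg.lstsq(A, y.reshape(-1), rcond=None)
print(np.round(w_hat, 2))              # recovers approximately [0.8, 0.2]
```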
Affiliation(s)
- Vivian T Lu
- Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA
- Charles E Wright
- Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA
- Charles Chubb
- Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA
- George Sperling
- Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA
21. Bartlett LK, Graf EW, Hedger N, Adams WJ. Motion adaptation and attention: A critical review and meta-analysis. Neurosci Biobehav Rev 2018; 96:290-301. PMID: 30355521; DOI: 10.1016/j.neubiorev.2018.10.010.
Abstract
The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects) motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial, or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias are likely to have significantly inflated the estimated effect of attention.
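For readers unfamiliar with the aggregation behind an overall effect size like this, a minimal random-effects sketch using inverse-variance weights and the DerSimonian-Laird estimate of between-study variance (the effect sizes here are simulated, not the review's data):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 37                                     # independent samples
d = rng.normal(1.1, 0.4, k)                # per-study Cohen's d (simulated)
v = rng.uniform(0.02, 0.15, k)             # per-study sampling variances

w = 1 / v                                  # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)            # heterogeneity statistic
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                      # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"d = {d_re:.2f}, 95% CI [{d_re - 1.96*se:.2f}, {d_re + 1.96*se:.2f}]")
```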
Affiliation(s)
- Laura K Bartlett
- School of Psychology, FELS, University of Southampton, Southampton, SO17 1BJ, England, UK
- Erich W Graf
- School of Psychology, FELS, University of Southampton, Southampton, SO17 1BJ, England, UK
- Nicholas Hedger
- School of Psychology, FELS, University of Southampton, Southampton, SO17 1BJ, England, UK
- Wendy J Adams
- School of Psychology, FELS, University of Southampton, Southampton, SO17 1BJ, England, UK.
| |
Collapse
|
22
|
Ma Z, Watamaniuk SNJ, Heinen SJ. Illusory motion reveals velocity matching, not foveation, drives smooth pursuit of large objects. J Vis 2017; 17:20. [PMID: 29090315 PMCID: PMC5665499 DOI: 10.1167/17.12.20] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them, since they cannot be foveated. The brain might calculate an object's centroid and then center the eyes on it during pursuit, as a foveation mechanism would. Alternatively, it might merely match the object's velocity through motion integration. We test these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogously to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.
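The contrast between the two hypotheses can be written as one line each: velocity matching predicts pursuit gain that rises or falls with the mean local drift, while centroid matching predicts gain pinned at 1. A toy sketch (the quantitative form and parameter values are assumptions for illustration):

```python
def pursuit_gain_prediction(v_translation, mean_local_drift, hypothesis):
    """Predicted eye velocity / target velocity for a drifting-Gabor array."""
    if hypothesis == "velocity_matching":      # integrate all retinal motion
        return (v_translation + mean_local_drift) / v_translation
    if hypothesis == "centroid_matching":      # foveate the array's center
        return 1.0                             # local drift is irrelevant
    raise ValueError(hypothesis)

for drift in (+2.0, 0.0, -2.0):                # deg/s: consistent/neutral/opposite
    print(drift,
          pursuit_gain_prediction(10.0, drift, "velocity_matching"),
          pursuit_gain_prediction(10.0, drift, "centroid_matching"))
```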
Affiliation(s)
- Zheng Ma
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Stephen J Heinen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
23. Parhizi B, Daliri MR, Behroozi M. Decoding the different states of visual attention using functional and effective connectivity features in fMRI data. Cogn Neurodyn 2017; 12:157-170. PMID: 29564025; DOI: 10.1007/s11571-017-9461-1.
Abstract
The present paper concentrates on the impact of a visual attention task on the structure of brain functional and effective connectivity networks, using coherence and Granger causality methods. Since most previous studies used correlation methods and resting-state functional connectivity, a task-based approach was selected for this experiment to extend our knowledge of spatial and feature-based attention. The whole brain was divided into 82 sub-regions based on Brodmann areas. Coherence and Granger causality were applied to construct functional and effective connectivity matrices. These matrices were converted into graphs using a threshold, and graph-theoretic measures, including degree and characteristic path length, were calculated from them. Visual attention revealed more information during the spatial-based task: degree was higher, and characteristic path length lower, in the spatial-based task in both functional and effective connectivity. Primary and secondary visual cortex (Brodmann areas 17 and 18) were highly connected to parietal and prefrontal cortex during the visual attention task. Whole-brain connectivity was also calculated for both functional and effective connectivity. Our results reveal that Brodmann areas 17, 18, 19, 46, 3, and 4 played a significant role, indicating that somatosensory, parietal, and prefrontal regions, along with visual cortex, were highly connected to other parts of the cortex during the visual attention task. Characteristic path length results indicated an increase in functional connectivity and more functional integration in spatial-based attention compared with feature-based attention. These results can provide useful information about the mechanism of visual attention at the network level.
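A compact sketch of the graph pipeline described above, thresholding a simulated connectivity matrix and computing the two reported measures with networkx (the matrix values and the threshold are illustrative; the 82 regions follow the abstract):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n = 82                                            # Brodmann-based regions
C = rng.uniform(0, 1, (n, n))
C = (C + C.T) / 2                                 # symmetric coherence matrix
np.fill_diagonal(C, 0)

A = (C > 0.7).astype(int)                         # threshold -> adjacency matrix
G = nx.from_numpy_array(A)

degree = dict(G.degree())                         # connections per region
print("mean degree:", np.mean(list(degree.values())))
if nx.is_connected(G):                            # path length needs one component
    print("characteristic path length:",
          nx.average_shortest_path_length(G))     # mean shortest path over pairs
```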
Affiliation(s)
- Behdad Parhizi
- Neuroscience and Neuroengineering Research Laboratory, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran
- Mohammad Reza Daliri
- Neuroscience and Neuroengineering Research Laboratory, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran, Iran
- Mehdi Behroozi
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Science (IPM), Niavaran, Tehran, Iran
- Department of Biopsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
24
|
Abstract
Motion signals are a rich source of information used in many everyday tasks, such as segregating objects from the background and navigating. Motion analysis by biological systems is generally considered to consist of two stages: extraction of local motion signals followed by spatial integration. Studies using synthetic stimuli show that there are many kinds and subtypes of local motion signals. When presented in isolation, these stimuli elicit behavioral and neurophysiological responses in a wide range of species, from insects to mammals. However, these mathematically distinct varieties of local motion signals typically co-exist in natural scenes. This study focuses on interactions between two kinds of local motion signals: Fourier and glider. Fourier signals are typically associated with translation, while glider signals occur when an object approaches or recedes. Here, using a novel class of synthetic stimuli, we ask how distinct kinds of local motion signals interact and whether context influences sensitivity to Fourier motion. We report that local motion signals of different types interact at the perceptual level, and that this interaction can include subthreshold summation and, in some subjects, subtle context-dependent changes in sensitivity. We discuss the implications of these observations and the factors that may underlie them.
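Fourier and glider signals can be phrased as spacetime correlations on a binary movie: Fourier motion is carried by two-point correlations, glider motion by three-point ones. A hedged sketch follows; the particular three-point template and the drifting test pattern are our illustrative choices, not this study's stimuli.

```python
import numpy as np

def fourier_corr(movie):
    """Two-point spacetime correlation <s(x,t) * s(x+1,t+1)>: the cue
    carried by Fourier (translation-type) motion. movie: (time, space)."""
    return np.mean(movie[:-1, :-1] * movie[1:, 1:])

def glider_corr(movie):
    """A three-point correlation <s(x,t) * s(x+1,t) * s(x+1,t+1)>; glider
    stimuli are built around such templates (this template is illustrative)."""
    return np.mean(movie[:-1, :-1] * movie[:-1, 1:] * movie[1:, 1:])

rng = np.random.default_rng(1)
row = rng.choice([-1.0, 1.0], size=64)
movie = np.stack([np.roll(row, t) for t in range(32)])  # rigid rightward drift
print(fourier_corr(movie))   # 1.0: translation saturates the two-point cue
print(glider_corr(movie))    # ~0: it leaves the three-point cue near zero
```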
Collapse
Affiliation(s)
- Eyal I Nitzany
- Program in Computational Biology & Medicine, Cornell University, Ithaca, NY, USA; Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
| | - Maren E Loe
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
| | - Stephanie E Palmer
- Department of Organismal Biology and Anatomy and Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA. http://pondside.uchicago.edu/oba/faculty/palmer_s.html
| | - Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York City, NY, USA. http://www-users.med.cornell.edu/~jdvicto/jdvonweb.html
| |
Collapse
|
25
|
Norcia AM, Pei F, Kohler PJ. Evidence for long-range spatiotemporal interactions in infant and adult visual cortex. J Vis 2017. [PMID: 28622700 PMCID: PMC5477630 DOI: 10.1167/17.6.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The development of spatiotemporal interactions giving rise to classical receptive field properties has been well studied in animal models, but little is known about the development of putative nonclassical mechanisms in any species. Here we used visual evoked potentials to study the developmental status of spatiotemporal interactions for stimuli that were biased to engage long-range spatiotemporal integration mechanisms. We compared responses to widely spaced stimuli presented either in temporal succession or at the same time. The former configuration elicits a percept of apparent motion in adults, but the latter does not. Component flash responses were summed to make a linear prediction (no spatiotemporal interaction) for comparison with the measured evoked responses in the sequential and simultaneous flash conditions. In adults, linear summation of the separate flash responses measured with 40% contrast stimuli predicted sequential flash responses twice as large as those measured, indicating that the response measured under apparent-motion conditions is subadditive. Simultaneous-flash responses at the same spatial separation were also subadditive, but substantially less so. The subadditivity in both cases could be modeled as a simple multiplicative gain term across all electrodes and time points. In infants aged 3-8 months, responses to the stimuli used in adults were similar to their linear predictions at 40% contrast, but the responses measured at 80% contrast resembled the subadditive responses of the adults for both sequential and simultaneous flash conditions. We interpret the developmental data as indicating that adult-like long-range spatiotemporal interactions can be demonstrated by 3-8 months, once stimulus contrast is high enough.
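The analysis logic, a linear (no-interaction) prediction built by summing component flash responses and then a single multiplicative gain fit across all electrodes and time points, can be sketched as follows; the response arrays are hypothetical stand-ins, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical evoked responses (electrodes x time); stand-ins only
flash1 = rng.standard_normal((8, 200))
flash2 = rng.standard_normal((8, 200))
linear_pred = flash1 + flash2            # prediction with no interaction

# Simulate a subadditive measured response plus noise
measured = 0.5 * linear_pred + 0.05 * rng.standard_normal((8, 200))

# One multiplicative gain shared across all electrodes and time points,
# fit by least squares
g = np.sum(linear_pred * measured) / np.sum(linear_pred ** 2)
print(round(float(g), 2))                # ~0.5, i.e., g < 1 -> subadditive
```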
Collapse
Affiliation(s)
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, CA, USA
| | - Francesca Pei
- Department of Psychology, Stanford University, Stanford, CA, USADepartment of Psychiatry, Stanford University, Stanford, CA, USA
| | - Peter J Kohler
- Department of Psychology, Stanford University, Stanford, CA, USA
| |
Collapse
|
26
|
Zhu JE, Ma WJ. Orientation-dependent biases in length judgments of isolated stimuli. J Vis 2017; 17:20. [PMID: 28245499 DOI: 10.1167/17.2.20] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Vertical line segments tend to be perceived as longer than horizontal ones of the same length, but this may in part be due to configuration effects. To minimize such effects, we used isolated line segments in a two-interval forced-choice paradigm, without limiting ourselves to horizontal and vertical orientations. We fitted psychometric curves using a Bayesian method that assumes that, for a given subject, the lapse rate is the same across all conditions. The closer a line segment's orientation was to vertical, the longer it was perceived to be. Moreover, subjects tended to report the standard line (in the second interval) as longer. The data were well described by a model that contains both an orientation-dependent and an interval-dependent multiplicative bias. Using this model, we estimated that a vertical line was on average perceived as 9.2% ± 2.1% longer than a horizontal line, and a second-interval line was on average perceived as 2.4% ± 0.9% longer than a first-interval line. Moving from a descriptive to an explanatory model, we hypothesized that anisotropy in the polar angle of lines in three dimensions underlies the horizontal-vertical illusion; specifically, that line segments more often have a polar angle of 90° (corresponding to the ground plane) than any other polar angle. This model qualitatively accounts not only for the empirical relationship between projected length and projected orientation that predicts the horizontal-vertical illusion, but also for the empirical distribution of projected orientation in photographs of natural scenes and for paradoxical results reported earlier for slanted surfaces.
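A minimal sketch of the descriptive model is below: two multiplicative biases applied to physical length, with the reported 9.2% and 2.4% effects as defaults. The smooth sin² interpolation between horizontal and vertical is our assumption, not the fitted model.

```python
import numpy as np

def perceived_length(L, theta_deg, interval, a_ori=0.092, a_int=0.024):
    """Two multiplicative biases: orientation-dependent (vertical ~9.2%
    longer than horizontal) and interval-dependent (second interval ~2.4%
    longer). The sin^2 interpolation across orientations is our assumption."""
    b_ori = 1.0 + a_ori * np.sin(np.radians(theta_deg)) ** 2
    b_int = 1.0 + a_int * (interval == 2)
    return L * b_ori * b_int

print(perceived_length(100.0, 0, 1))     # 100.0  horizontal, first interval
print(perceived_length(100.0, 90, 2))    # ~111.8 vertical, second interval
```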
Collapse
Affiliation(s)
- Jielei Emma Zhu
- Center for Neural Science and Department of Psychology, New York University, New York, NY,
| | - Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY,
| |
Collapse
|
27
|
Agosta S, Magnago D, Tyler S, Grossman E, Galante E, Ferraro F, Mazzini N, Miceli G, Battelli L. The Pivotal Role of the Right Parietal Lobe in Temporal Attention. J Cogn Neurosci 2016; 29:805-815. [PMID: 27991181 DOI: 10.1162/jocn_a_01086] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The visual system is extremely efficient at detecting events across time, even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients with right parietal lesions, including the TPJ, are severely impaired at discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is a simultaneity judgment task, in which participants indicate whether two events occurred simultaneously or not. We psychophysically varied the flicker rate of four flickering disks; on most trials, one disk (in either the left or the right visual field) flickered out of phase relative to the others. We asked participants to report whether the two disks presented on the left or on the right were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment, in both visual fields, while their low-level visual functions were normal. Importantly, to causally link the right TPJ to relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over the right TPJ, the left TPJ, or early visual cortex as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over the left TPJ or early visual cortex did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.
Collapse
Affiliation(s)
- Sara Agosta
- Istituto Italiano di Tecnologia, Rovereto, Italy
| | | | - Sarah Tyler
- Istituto Italiano di Tecnologia, Rovereto, Italy; University of California, Irvine
| | | | | | | | - Nunzia Mazzini
- Ospedale Riabilitativo Villa Rosa, Pergine Valsugana, Trento, Italy
| | | | - Lorella Battelli
- Istituto Italiano di Tecnologia, Rovereto, Italy; Harvard Medical School, Boston, MA
| |
Collapse
|
28
|
Huddleston WE, DeYoe EA. First-Order and Second-Order Spectral ‘Motion’ Mechanisms in the Human Auditory System. Perception 2016; 32:1141-9. [PMID: 14651326 DOI: 10.1068/p5077] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Light energy displaced along the retinal photoreceptor array leads to a perception of visual motion. In audition, displacement of mechanical energy along the cochlear hair cell array is conceptually similar but leads to a perception of ‘movement’ in frequency space (spectral motion), a rising or falling pitch. In vision there are other types of stimuli that also evoke a percept of motion but do not involve a displacement of energy across the photoreceptors (second-order stimuli). In this study, we used psychophysical methods to determine whether such second-order stimuli also exist in audition, and whether the resulting percept rivals that of first-order spectral motion. First-order auditory stimuli consisted of a frequency sweep of sixteen non-harmonic tones between 297 and 12123 Hz. Second-order stimuli consisted of the same tones, but with a random subset turned on at the beginning of a trial. During the trial, each tone in sequence randomly changed state (ON-to-OFF or OFF-to-ON). Thus, the state transitions created a ‘sweep’ with no net energy displacement correlated with the sweep direction. At relatively slow sweep speeds, subjects readily identified the sweep direction for both first-order and second-order stimuli, though accuracy decreased for second-order stimuli as the sweep speed increased. The latter characteristic is also true of some second-order visual stimuli. These results suggest a stronger parallelism between auditory and visual processing than previously appreciated.
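The second-order stimulus construction can be sketched directly from the description: a bank of fixed tones, a random subset initially on, and sequential state flips that trace out a 'sweep' with no net energy displacement. Tone spacing, step duration, and sweep direction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tones, fs, step = 16, 44100, 0.05            # step duration is assumed
freqs = np.geomspace(297.0, 12123.0, n_tones)  # log spacing is assumed
state = rng.random(n_tones) < 0.5              # random subset ON at start

t = np.arange(int(fs * step)) / fs
segments = []
for k in range(n_tones):                       # ascending order -> upward sweep
    mix = np.zeros_like(t)
    for f, on in zip(freqs, state):
        if on:
            mix += np.sin(2 * np.pi * f * t)
    segments.append(mix)
    state[k] = not state[k]                    # ON-to-OFF or OFF-to-ON flip
audio = np.concatenate(segments)
audio /= np.max(np.abs(audio)) + 1e-12         # normalize for playback
```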
Collapse
Affiliation(s)
- Wendy E Huddleston
- Department of Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, USA
| | | |
Collapse
|
29
|
Nonlinear dynamics in the perceptual grouping of connected surfaces. Vision Res 2016; 126:80-96. [DOI: 10.1016/j.visres.2015.06.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2014] [Revised: 05/30/2015] [Accepted: 06/07/2015] [Indexed: 11/20/2022]
|
30
|
Clarke AM, Öğmen H, Herzog MH. A computational model for reference-frame synthesis with applications to motion perception. Vision Res 2016; 126:242-253. [DOI: 10.1016/j.visres.2015.08.018] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2015] [Revised: 08/25/2015] [Accepted: 08/28/2015] [Indexed: 10/22/2022]
|
31
|
Abstract
In object learning, humans can learn to abstract and conceptualize the shared visual features that define an object category. Learning therefore generalizes to transformations of familiar objects and even to new objects that differ in other physical properties. In contrast, visual perceptual learning (VPL), the improvement in discriminating fine differences of a basic visual feature through training, is commonly regarded as specific, low-level learning, because the improvement often disappears when the trained stimulus is simply relocated or rotated in the visual field. Such location and orientation specificity is taken as evidence for neural plasticity in primary visual cortex (V1) or improved readout of V1 signals. However, new training methods have shown complete VPL transfer across stimulus locations and orientations, suggesting the involvement of high-level cognitive processes. Here we report that VPL bears properties similar to object learning. Specifically, we found that orientation discrimination learning is completely transferable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex. Similarly, motion direction discrimination learning is transferable between first- and second-order motion signals. These results suggest that VPL can take place at a conceptual level and generalize to stimuli with different physical properties. Our findings thus reconcile perceptual and object learning into a unified framework. Significance Statement: Training in object recognition can produce a learning effect that is applicable to new viewing conditions or even to new objects with different physical properties. However, perceptual learning has long been regarded as a low-level form of learning because of its specificity to the trained stimulus conditions. Here we demonstrate, with new training tactics, that visual perceptual learning is completely transferable between distinct physical stimuli. This finding indicates that perceptual learning also operates at a conceptual level, in a stimulus-invariant manner.
Collapse
|
32
|
Schiller PH, Carvey CE. Demonstrations of Spatiotemporal Integration and what they Tell us about the Visual System. Perception 2016; 35:1521-55. [PMID: 17286122 DOI: 10.1068/p5564] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Five sets of displays are presented on the journal website to be viewed in conjunction with the text. We concentrate on the factors that give rise to the integration and disruption of the direction of apparent motion in two-dimensional and three-dimensional space. In the first set of displays we examine what factors contribute to the integration and disruption of apparent motion in the Ramachandran/Anstis clustered bistable quartets. In the second set we examine what factors give rise to the perception of the direction of motion in rotating two-dimensional wheels and dots. In the third and fourth sets we examine how the depth cues of shading and disparity contribute to the perception of apparent motion of opaque displays, and to the perception of rotating unoccluded displays, respectively. In the fifth set we examine how the depth cue of motion parallax influences the perception of apparent motion. Throughout, we make inferences about the roles which various parallel pathways and cortical areas play in the perceptions produced by the displays shown.
Collapse
Affiliation(s)
- Peter H Schiller
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | | |
Collapse
|
33
|
Velocity Selective Networks in Human Cortex Reveal Two Functionally Distinct Auditory Motion Systems. PLoS One 2016; 11:e0157131. [PMID: 27294673 PMCID: PMC4905637 DOI: 10.1371/journal.pone.0157131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2015] [Accepted: 05/25/2016] [Indexed: 12/02/2022] Open
Abstract
The auditory system encounters motion cues through an acoustic object's movement or through rotation of the listener's head in a stationary sound field, generating a wide range of naturally occurring velocities, from a few to several hundred degrees per second. The angular velocity of moving acoustic objects relative to a listener is typically slow and does not exceed tens of degrees per second, whereas head rotations in a stationary acoustic field may generate fast-changing spatial cues on the order of several hundred degrees per second. We hypothesized that these two regimes (encoding slow movements of an object versus fast head rotations) may engage functionally distinct substrates in processing spatially dynamic auditory cues, with the latter potentially involved in maintaining perceptual constancy in a stationary field during head rotations and therefore possibly involving corollary-discharge mechanisms in premotor cortex. Using fMRI, we examined cortical response patterns to sound sources moving at a wide range of velocities in 3D virtual auditory space. We found a significant categorical difference between fast- and slow-moving sounds, with stronger activations in response to higher velocities in the posterior superior temporal regions, the planum temporale, and, notably, the premotor ventral-rostral (PMVr) area implicated in planning neck and head motor functions.
Collapse
|
34
|
Quaia C, Optican LM, Cumming BG. A Motion-from-Form Mechanism Contributes to Extracting Pattern Motion from Plaids. J Neurosci 2016; 36:3903-18. [PMID: 27053199 PMCID: PMC4821905 DOI: 10.1523/jneurosci.3398-15.2016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 02/22/2016] [Accepted: 02/24/2016] [Indexed: 11/21/2022] Open
Abstract
Since the discovery of neurons selective for pattern motion direction in primate middle temporal area MT (Albright, 1984; Movshon et al., 1985), the neural computation of this signal has been the subject of intense study. The bulk of this work has explored responses to plaids obtained by summing two drifting sinusoidal gratings. Unfortunately, with these stimuli, many different mechanisms are similarly effective at extracting pattern motion. We devised a new set of stimuli, obtained by summing two random line stimuli with different orientations. This allowed several novel manipulations, including generating plaids that do not contain rigid 2D motion. Importantly, these stimuli do not engage most of the previously proposed mechanisms. We then recorded the ocular following responses that such stimuli induce in human subjects. We found that pattern motion is computed even with stimuli that do not cohere perceptually, including those without rigid motion, and even when the two gratings are presented separately to the two eyes. Moderate temporal and/or spatial separation of the gratings impairs the computation. We show that, of the models proposed so far, only those based on the intersection-of-constraints rule, embedding a motion-from-form mechanism (in which orientation signals are used in the computation of motion direction signals), can account for our results. At least for the eye movements reported here, a motion-from-form mechanism is thus involved in one of the most basic functions of the visual motion system: extracting motion direction from complex scenes. Significance Statement: Anatomical considerations led to the proposal that visual function is organized in separate processing streams: one (ventral) devoted to form and one (dorsal) devoted to motion. Several experimental results have challenged this view, arguing in favor of a more integrated view of visual processing. Here we add to this body of work, supporting a role for form information even in a function (extracting pattern motion direction from complex scenes) for which decisive evidence for the involvement of form signals has been lacking.
Collapse
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
| | - Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
| | - Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
| |
Collapse
|
35
|
Wang S, Jin K, Lu H, Cheng C, Ye J, Qian D. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1046-1055. [PMID: 26672033 DOI: 10.1109/tmi.2015.2506902] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in portable fundus photography, non-mydriatic image quality is more vulnerable to distortions such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality, which would be especially useful in helping inexperienced individuals collect meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system (multi-channel sensation, just-noticeable blur, and the contrast sensitivity function) to detect illumination and color distortion, blur, and low-contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented with a support vector machine and a decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying this human-visual-system-based algorithm to assess image quality in non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
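The classification stage, features in and a binary quality label out with ROC analysis, might look like the following scikit-learn sketch; the features and labels are synthetic stand-ins, not the paper's graded images.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for three HVS-inspired features per image
# (illumination/color, blur, contrast); labels are toy, not clinical data
rng = np.random.default_rng(4)
X = rng.standard_normal((536, 3))
y = (X.sum(axis=1) + 0.5 * rng.standard_normal(536) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(round(auc, 3))                      # area under the ROC curve
```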
Collapse
|
36
|
Razaak M, Martini MG. CUQI: cardiac ultrasound video quality index. J Med Imaging (Bellingham) 2016; 3:011011. [PMID: 27014715 DOI: 10.1117/1.jmi.3.1.011011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2015] [Accepted: 02/16/2016] [Indexed: 11/14/2022] Open
Abstract
Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications, since compression and transmission processes often compromise video quality. Several state-of-the-art video quality metrics assess the perceptual quality of the video. For a medical video, however, assessing quality in terms of "diagnostic" value rather than "perceptual" quality is more important. We present a diagnostic-quality-oriented metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid, repetitive cardiac motion and distinct structural information, characteristics that are exploited by the proposed metric. The proposed cardiac ultrasound video quality index (CUQI) is a full-reference metric that uses the motion and edge information of the cardiac ultrasound video to evaluate video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. Our tests showed that the metric correlates highly with medical expert opinion and in several cases outperforms the state-of-the-art video quality metrics considered in our tests.
Collapse
Affiliation(s)
- Manzoor Razaak
- Kingston University, Wireless and Multimedia Networking Research Group, Penrhyn Road, Kingston upon Thames, KT1 2EE, London, United Kingdom
| | - Maria G Martini
- Kingston University, Wireless and Multimedia Networking Research Group, Penrhyn Road, Kingston upon Thames, KT1 2EE, London, United Kingdom
| |
Collapse
|
37
|
Allard R, Faubert J. The Role of Feature Tracking in the Furrow Illusion. Front Hum Neurosci 2016; 10:81. [PMID: 27014018 PMCID: PMC4779897 DOI: 10.3389/fnhum.2016.00081] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2014] [Accepted: 02/17/2016] [Indexed: 11/13/2022] Open
Affiliation(s)
- Rémy Allard
- Sorbonne Universités, Pierre and Marie Curie University Paris 06, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris, France
| | - Jocelyn Faubert
- Visual Psychophysics and Perception Laboratory, Université de Montréal, Montréal, QC, Canada
| |
Collapse
|
38
|
Abstract
A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379-393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that different reference-field interactions are nonlinear and depend on how the motion vectors are grouped. These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames.
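Johansson-style vector decomposition has a very compact core: subtract a common component from each part's motion vector and keep the residual as relative motion. A sketch under the simplest choice of common component (the mean) follows; as the abstract notes, the decomposition itself is ill-posed, so this is one solution among many.

```python
import numpy as np

def decompose(vectors):
    """Split retinotopic motion vectors into a common component plus
    residual relative motions. Using the mean as the common component is
    one simple choice; the decomposition itself is ill-posed."""
    common = vectors.mean(axis=0)
    return common, vectors - common

v = np.array([[3.0, 0.0], [3.0, 1.0], [3.0, -1.0]])   # parts of one object
common, relative = decompose(v)
print(common)     # [3. 0.]  shared rightward translation
print(relative)   # residual up/down motion seen relative to the object
```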
Collapse
|
39
|
Piponnier JC, Forget R, Gagnon I, McKerral M, Giguère JF, Faubert J. First- and Second-Order Stimuli Reaction Time Measures Are Highly Sensitive to Mild Traumatic Brain Injuries. J Neurotrauma 2015; 33:242-53. [PMID: 25950948 DOI: 10.1089/neu.2014.3832] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Mild traumatic brain injury (mTBI) has subtle effects on several brain functions that can be difficult to assess and follow up. We investigated the impact of mTBI on the perception of sine-wave gratings defined by first- and second-order characteristics. Fifteen adults diagnosed with mTBI were assessed at 15 days, 3 months, and 12 months postinjury. Fifteen matched controls followed the same testing schedule. Reaction times (RTs) for flicker detection and motion direction discrimination were measured. The stimulus contrast of first- and second-order patterns was equated to control for visibility, and correct-response RT means, standard deviations (SDs), medians, and interquartile ranges (IQRs) were calculated. The level of symptoms was also evaluated for comparison with the RT data. In general, RTs in mTBI were longer, and SDs as well as IQRs larger, than those of controls. In addition, mTBI participants' RTs to first-order stimuli were shorter than those to second-order stimuli, and SDs as well as IQRs were larger for first- than for second-order stimuli in the motion condition. These observations held across all three sessions. The level of symptoms in mTBI was higher than that of control participants, and this difference also persisted up to 1 year after the brain injury, despite an improvement. The combination of RT measures with particular stimulus properties is a highly sensitive method for measuring mTBI-induced visuomotor anomalies and provides a fine probe of the underlying mechanisms when the brain is exposed to mild trauma.
Collapse
Affiliation(s)
- Jean-Claude Piponnier
- Visual Psychophysics and Perception Laboratory, École d'Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Robert Forget
- École de réadaptation, Université de Montréal, and Centre de recherche interdisciplinaire en réadaptation du Montréal métropolitain, Montréal, QC, Canada
| | - Isabelle Gagnon
- Montreal Children's Hospital, McGill University Health Center, and School of Physical and Occupational Therapy, McGill University, Montréal, QC, Canada
| | - Michelle McKerral
- Centre de recherche interdisciplinaire en réadaptation-Centre de réadaptation Lucie-Bruneau, and Département de psychologie, Université de Montréal, Montréal, QC, Canada
| | - Jean-François Giguère
- Department of Surgery, Sacré-Coeur Hospital affiliated with Université de Montréal, Montréal, QC, Canada
| | - Jocelyn Faubert
- Visual Psychophysics and Perception Laboratory, École d'Optométrie, Université de Montréal, Montréal, QC, Canada
| |
Collapse
|
40
|
Abstract
To judge the overall direction of a shoal of fish or a crowd of people, observers must integrate motion signals across space and time. The limits on our ability to pool motion have largely been established using the motion coherence paradigm, in which observers report the direction of coherently moving dots amid randomly moving noise dots. Poor performance by autistic individuals on this task has widely been interpreted as evidence of disrupted integrative processes. Critically, however, motion coherence thresholds are not necessarily limited only by pooling. They could also be limited by imprecision in estimating the direction of individual elements or by difficulty segregating signal from noise. Here, 33 children with autism aged 6-13 years and 33 age- and ability-matched typical children performed a more robust task, reporting mean dot direction both in the presence and in the absence of directional variability, alongside a standard motion coherence task. Children with autism were just as sensitive to directional differences as typical children when all elements moved in the same direction (no variability). Remarkably, however, children with autism were more sensitive to the average direction in the presence of directional variability, providing the first evidence of enhanced motion integration in autism. Despite this improved averaging ability, children with autism performed comparably to typical children on the motion coherence task, suggesting that their motion coherence thresholds may be limited by reduced segregation of signal from noise. Although potentially advantageous under some conditions, increased integration may lead to feelings of "sensory overload" in children with autism.
Collapse
|
41
|
Jaekl P, Pesquita A, Alsius A, Munhall K, Soto-Faraco S. The contribution of dynamic visual cues to audiovisual speech perception. Neuropsychologia 2015; 75:402-10. [PMID: 26100561 DOI: 10.1016/j.neuropsychologia.2015.06.025] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2014] [Revised: 06/11/2015] [Accepted: 06/18/2015] [Indexed: 11/19/2022]
Abstract
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues, two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech.
Collapse
Affiliation(s)
- Philip Jaekl
- Center for Visual Science and Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA.
| | - Ana Pesquita
- UBC Vision Lab, Department of Psychology, University of British Columbia, Vancouver, BC, Canada
| | - Agnes Alsius
- Department of Psychology, Queen's University, Kingston, ON, Canada
| | - Kevin Munhall
- Department of Psychology, Queen's University, Kingston, ON, Canada
| | - Salvador Soto-Faraco
- Centre for Brain and Cognition, Department of Information Technology and Communications, Universitat Pompeu Fabra, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Spain
| |
Collapse
|
42
|
Abstract
Despite growing evidence for perceptual interactions between motion and position, no unifying framework exists to account for these two key features of our visual experience. We show that percepts of both object position and motion derive from a common object-tracking system: a system that optimally integrates sensory signals with a realistic model of motion dynamics, effectively inferring their generative causes. The object-tracking model provides an excellent fit to both position and motion judgments in simple stimuli. With no changes in model parameters, the same model also accounts for subjects' novel illusory percepts in more complex moving stimuli. The resulting framework is characterized by a strong bidirectional coupling between position and motion estimates and provides a rational, unifying account of a number of motion and position phenomena that are currently thought to arise from independent mechanisms. This includes motion-induced shifts in perceived position, perceptual slow-speed biases, slowing of motions shown in visual periphery, and the well-known curveball illusion. These results reveal that motion perception cannot be isolated from position signals. Even in the simplest displays with no changes in object position, our perception is driven by the output of an object-tracking system that rationally infers different generative causes of motion signals. Taken together, we show that object tracking plays a fundamental role in perception of visual motion and position.
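One standard way to make "optimal integration of sensory signals with a model of motion dynamics" concrete is a constant-velocity Kalman filter, in which position and velocity estimates are coupled through the gain. The sketch below is a generic illustration with arbitrary noise parameters, not the authors' fitted model.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # only noisy position is sensed
Q = np.diag([1e-5, 1e-3])                # process noise (illustrative)
R = np.array([[0.05]])                   # measurement noise (illustrative)

x, P = np.zeros((2, 1)), np.eye(2)
rng = np.random.default_rng(5)
for step in range(200):
    z = np.array([[2.0 * step * dt + rng.normal(0.0, 0.2)]])  # 2 units/s
    x, P = F @ x, F @ P @ F.T + Q                             # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P       # update
print(x.ravel())   # position and velocity estimates, coupled through K
```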
Collapse
|
43
|
Abstract
Several psychophysical studies of visual short-term memory (VSTM) have shown high-fidelity storage capacity for many properties of visual stimuli. On judgments of the spatial frequency of gratings, for example, discrimination performance does not decrease significantly, even for memory intervals of up to 30 s. For other properties, such as stimulus orientation and contrast, however, such “perfect storage” behavior is not found, although the reasons for this difference remain unresolved. Here, we report two experiments in which we investigated the nature of the representation of stimulus contrast in VSTM using spatially complex, two-dimensional random-noise stimuli. We addressed whether information about contrast per se is retained during the memory interval by using a test stimulus with the same spatial structure but either the same or the opposite local contrast polarity, with respect to the comparison (i.e., remembered) stimulus. We found that discrimination thresholds got steadily worse with increasing duration of the memory interval. Furthermore, performance was better when the test and comparison stimuli had the same local contrast polarity than when they were contrast-reversed. Finally, when a noise mask was introduced during the memory interval, its disruptive effect was maximal when the spatial configuration of its constituent elements was uncorrelated with those of the comparison and test stimuli. These results suggest that VSTM for contrast is closely tied to the spatial configuration of stimuli and is not transformed into a more abstract representation.
Collapse
|
44
|
Sun P, Chubb C, Sperling G. Two mechanisms that determine the Barber-Pole Illusion. Vision Res 2015; 111:43-54. [PMID: 25872181 DOI: 10.1016/j.visres.2015.04.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Revised: 04/05/2015] [Accepted: 04/06/2015] [Indexed: 11/18/2022]
Abstract
In the Barber-Pole Illusion (BPI), a diagonally moving grating is perceived as moving vertically because of the narrow, vertical, rectangular shape of the aperture window through which it is viewed. This strong shape-motion interaction persists through a wide range of parametric variations in the shape of the window, the spatial and temporal frequencies of the moving grating, the contrast of the moving grating, complex variations in the composition of the grating and window shape, and the duration of viewing. It is widely believed that end-stop-feature (third-order) motion computations determine the BPI, and that Fourier motion-energy (first-order) computations determine failures of the BPI. Here we show that the BPI is more complex: (1) In a wide variety of conditions, weak-feature stimuli (extremely fast, low-contrast gratings: 21.5 Hz, 4% contrast) that stimulate only the Fourier (first-order) motion system actually produce a slightly better BPI than classical strong-feature gratings (2.75 Hz, 32% contrast). (2) Reverse-phi barber-pole stimuli are seen exclusively in the feature (third-order) BPI direction when presented at 2.75 Hz and exclusively in the opposite (Fourier, first-order) BPI direction at 21.5 Hz, indicating that both the first- and third-order systems can produce the BPI. (3) The BPI in barber poles with scalloped aperture boundaries is much weaker than in normal straight-edge barber poles for 2.75 Hz stimuli but not for 21.5 Hz stimuli. Conclusions: Both first-order and third-order stimuli produce strong BPIs. In some stimuli, local Fourier motion energy (first-order) produces the BPI via a subsequent motion-path-integration computation (Journal of Vision (2014) 14, 1-27); in other stimuli, the BPI is determined by various feature (third-order) motion inputs; in most stimuli, the BPI involves combinations of both. High-temporal-frequency, low-contrast stimuli favor the first-order motion-path-integration computation; low-temporal-frequency, high-contrast stimuli favor third-order motion computations.
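The reverse-phi result in (2) follows from the sign-sensitivity of first-order (Fourier) correlation: inverting the contrast of the second frame flips the preferred direction of an opponent two-frame correlator. A minimal demonstration with an ad hoc opponent cue, not the authors' model:

```python
import numpy as np

x = np.arange(256)
frame1 = np.sin(2 * np.pi * x / 32)
frame2 = np.roll(frame1, 4)              # pattern steps 4 px rightward

def direction_cue(a, b, shift=4):
    """Opponent first-order cue: does frame b match frame a displaced
    rightward or leftward?"""
    right = np.mean(np.roll(a, shift) * b)
    left = np.mean(np.roll(a, -shift) * b)
    return right - left

print(direction_cue(frame1, frame2))     # > 0: phi, signals rightward
print(direction_cue(frame1, -frame2))    # < 0: contrast reversal flips it
```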
Collapse
Affiliation(s)
- Peng Sun
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA 92617, United States; Department of Psychology, New York University, New York, NY 10003, United States.
| | - Charles Chubb
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA 92617, United States
| | - George Sperling
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA 92617, United States
| |
Collapse
|
45
|
Tyler SC, Dasgupta S, Agosta S, Battelli L, Grossman ED. Functional connectivity of parietal cortex during temporal selective attention. Cortex 2015; 65:195-207. [DOI: 10.1016/j.cortex.2015.01.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2014] [Revised: 10/17/2014] [Accepted: 01/20/2015] [Indexed: 11/29/2022]
|
46
|
Nohara S, Kawano K, Miura K. Difference in perceptual and oculomotor responses revealed by apparent motion stimuli presented with an interstimulus interval. J Neurophysiol 2015; 113:3219-28. [PMID: 25810485 DOI: 10.1152/jn.00647.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2014] [Accepted: 03/12/2015] [Indexed: 11/22/2022] Open
Abstract
To understand the mechanisms underlying visual motion analyses for perceptual and oculomotor responses and their similarities/differences, we analyzed eye movement responses to two-frame animations of dual-grating 3f5f stimuli while subjects performed direction discrimination tasks. The 3f5f stimulus was composed of two sinusoids with a spatial frequency ratio of 3:5 (3f and 5f), creating a pattern with fundamental frequency f. When this stimulus was shifted by 1/4 of the wavelength, the two components shifted 1/4 of their wavelengths and had opposite directions: the 5f forward and the 3f backward. By presenting the 3f5f stimulus with various interstimulus intervals (ISIs), two visual-motion-analysis mechanisms, low-level energy-based and high-level feature-based, could be effectively distinguished. This is because response direction depends on the relative contrast between the components when the energy-based mechanism operates, but not when the feature-based mechanism works. We found that when the 3f5f stimuli were presented with shorter ISIs (<100 ms), and 3f component had higher contrast, both perceptual and ocular responses were in the direction of the pattern shift, whereas the responses were reversed when the 5f had higher contrast, suggesting operation of the energy-based mechanism. On the other hand, the ocular responses were almost negligible with longer ISIs (>100 ms), whereas perceived directions were biased toward the direction of pattern shift. These results suggest that the energy-based mechanism is dominant in oculomotor responses throughout ISIs; however, there is a transition from energy-based to feature-tracking mechanisms when we perceive visual motion.
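The 3f5f construction can be reproduced in a few lines: shifting the compound by a quarter of the fundamental wavelength advances the 5f component a quarter of its own wavelength while moving the 3f component a quarter wavelength backward. A sketch with arbitrary contrasts:

```python
import numpy as np

f = 1.0                                  # fundamental frequency (cycles/deg)
x = np.linspace(0.0, 2.0, 512, endpoint=False)

def pattern(phase):
    """3f and 5f sinusoids; the compound repeats at the fundamental f.
    'phase' is the phase shift applied at the fundamental."""
    return (0.5 * np.sin(2 * np.pi * 3 * f * x + 3 * phase)
            + 0.5 * np.sin(2 * np.pi * 5 * f * x + 5 * phase))

frame1 = pattern(0.0)
frame2 = pattern(np.pi / 2)              # 1/4-wavelength pattern shift
# The 3f component's phase moves 3*pi/2 (= -pi/2: a quarter of its own
# wavelength backward); the 5f component's moves 5*pi/2 (= +pi/2: a quarter
# wavelength forward), i.e., the two components step in opposite directions.
```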
Collapse
Affiliation(s)
- Shizuka Nohara
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan; and Faculty of Medicine, Kyoto University, Kyoto, Japan
| | - Kenji Kawano
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan; and
| | - Kenichiro Miura
- Department of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Kyoto, Japan; and
| |
Collapse
|
47
|
Bosco G, Monache SD, Gravano S, Indovina I, La Scaleia B, Maffei V, Zago M, Lacquaniti F. Filling gaps in visual motion for target capture. Front Integr Neurosci 2015; 9:13. [PMID: 25755637 PMCID: PMC4337337 DOI: 10.3389/fnint.2015.00013] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2014] [Accepted: 01/30/2015] [Indexed: 11/17/2022] Open
Abstract
A remarkable challenge our brain must constantly face when interacting with the environment is ambiguous and, at times, even missing sensory information. This is particularly compelling for vision, the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion, or even catch them on the fly, despite the transient lack of visual motion information. This implies that the brain can fill the gaps in sensory information by extrapolating the object's motion through the occlusion. In recent years, much experimental evidence has accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation depends on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer-term representations derived from previous experience with the environment. We review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.
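At its simplest, the extrapolation described here reduces to propagating the remembered pre-occlusion state forward in time, optionally with a long-term environmental prior such as an internal model of gravity. The two toy functions below are stand-ins for those two ingredients, not any specific model from the reviewed literature.

```python
def extrapolate(pos, vel, t_gap):
    """First-order extrapolation through an occlusion, standing in for the
    short-term memory of the target's pre-occlusion state."""
    return pos + vel * t_gap

def extrapolate_vertical(y, vy, t_gap, g=9.81):
    """Adds a long-term environmental prior, an internal model of gravity,
    on top of the memorized state (illustrative; units: m, m/s, s)."""
    return y + vy * t_gap - 0.5 * g * t_gap ** 2

print(extrapolate(5.0, 8.0, 0.3))            # 7.4: predicted reappearance
print(extrapolate_vertical(2.0, 0.0, 0.3))   # ~1.56: a falling target
```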
Collapse
Affiliation(s)
- Gianfranco Bosco
- Department of Systems Medicine, University of Rome "Tor Vergata", Rome, Italy; Centre of Space Bio-medicine, University of Rome "Tor Vergata", Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Sergio Delle Monache
- Department of Systems Medicine, University of Rome "Tor Vergata", Rome, Italy; Centre of Space Bio-medicine, University of Rome "Tor Vergata", Rome, Italy
| | - Silvio Gravano
- Centre of Space Bio-medicine, University of Rome "Tor Vergata", Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Iole Indovina
- Centre of Space Bio-medicine, University of Rome "Tor Vergata", Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Barbara La Scaleia
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Vincenzo Maffei
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Myrka Zago
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| | - Francesco Lacquaniti
- Department of Systems Medicine, University of Rome "Tor Vergata", Rome, Italy; Centre of Space Bio-medicine, University of Rome "Tor Vergata", Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
| |
Collapse
|
48
|
Zhou J, Yan F, Lu ZL, Zhou Y, Xi J, Huang CB. Broad bandwidth of perceptual learning in second-order contrast modulation detection. J Vis 2015; 15:20. [PMID: 25686623 PMCID: PMC4528671 DOI: 10.1167/15.2.20] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2014] [Accepted: 01/07/2015] [Indexed: 11/24/2022] Open
Abstract
Comparing the characteristics of learning in first- and second-order systems might inform us about differences in neural plasticity between the two systems. In the current study, we aimed to determine the properties of perceptual learning in second-order contrast-modulation detection in normal adults. We trained nine observers to detect second-order gratings at an envelope modulation spatial frequency of 8 cycles/° with their nondominant eyes. We found that, although training generated the largest improvements around the trained frequency, contrast sensitivity over a broad range of spatial frequencies also improved, with a 4.09-octave bandwidth of perceptual learning, exhibiting specificity to the trained spatial frequency as well as a relatively large degree of generalization. The improvements in the modulation sensitivity function (MSF) did not differ significantly between the trained and untrained eyes. Furthermore, training did not significantly change subjects' ability to detect first-order gratings. Our results suggest that perceptual learning in second-order detection might occur at a postchannel level in binocular neurons, possibly through reducing the internal noise of the visual system.
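A second-order (contrast-modulated) grating of the kind trained here can be generated by multiplying a zero-mean noise carrier by a sinusoidal contrast envelope; the parameters below are illustrative, not the study's display values.

```python
import numpy as np

rng = np.random.default_rng(6)
size, f_env, c, m = 256, 8.0, 0.3, 0.5    # 8 envelope cycles per patch

y = np.linspace(0.0, 1.0, size, endpoint=False)[:, None]
carrier = rng.choice([-1.0, 1.0], size=(size, size))   # binary noise
envelope = 1.0 + m * np.sin(2 * np.pi * f_env * y)     # contrast modulation
image = 0.5 * (1.0 + c * envelope * carrier)           # mean luminance 0.5
# E[carrier] = 0, so there is no luminance grating at f_env: the pattern
# is visible only to mechanisms that track contrast (second-order) structure.
```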
Collapse
Affiliation(s)
- Jiawei Zhou
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei, China
| | - Fangfang Yan
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Zhong-Lin Lu
- Laboratory of Brain Processes, Department of Psychology, Ohio State University, Columbus, OH, USA
| | - Yifeng Zhou
- Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei, China
| | - Jie Xi
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| | - Chang-Bing Huang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
49
|
Dürsteler MR. A common framework for the analysis of complex motion? Standstill and capture illusions. Front Hum Neurosci 2015; 8:999. [PMID: 25566023 PMCID: PMC4270218 DOI: 10.3389/fnhum.2014.00999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2014] [Accepted: 11/24/2014] [Indexed: 12/04/2022] Open
Abstract
A series of illusions was created by presenting stimuli consisting of two overlapping surfaces, each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth texture. This illusory motion standstill arises from a failure to represent two independent surfaces (one for the luminance and one for the depth texture) and motion transparency (the ability to perceive the motion of both surfaces simultaneously). Instead, the stimulus is represented as a single non-transparent surface that takes on the stationary character of the luminance-defined texture. By contrast, if it is the 2-D luminance-defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth textures that are rotating, expanding/contracting, or spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth texture is perceived. This observation strongly favors a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically defined textures with smooth transitions between their colors. This suggests that, with respect to color motion perception, the complex-motion pathway can only accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual-opponent color pathway.
Collapse
Affiliation(s)
- Max R Dürsteler
- Vestibulo-Oculomotor Lab, Department of Neurology, University Hospital Zürich, Zürich, Switzerland
| |
Collapse
|
50
|
Tang Y, Liu C, Liu Z, Hu X, Yu YQ, Zhou Y. Processing deficits of motion of contrast-modulated gratings in anisometropic amblyopia. PLoS One 2014; 9:e113400. [PMID: 25409477 PMCID: PMC4237427 DOI: 10.1371/journal.pone.0113400] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2014] [Accepted: 10/23/2014] [Indexed: 12/02/2022] Open
Abstract
Several studies have indicated substantial processing deficits for static second-order stimuli in amblyopia. However, less is known about the perception of second-order moving gratings. To investigate this issue, we measured contrast sensitivity for second-order (contrast-modulated) moving gratings in seven anisometropic amblyopes and ten normal controls. The measurements were performed with non-equated carriers and a series of equated carriers. For comparison, sensitivity for first-order motion and for static second-order stimuli was also measured. Most of the amblyopic eyes (AEs) showed reduced sensitivity for second-order moving gratings relative to the fellow non-amblyopic eyes (NAEs) and the dominant eyes (CEs) of normal control subjects, even when the detectability of the noise carriers was carefully controlled, indicating substantial deficits in processing the motion of contrast-modulated gratings in anisometropic amblyopia. In contrast, the non-amblyopic eyes of the anisometropic amblyopes were relatively spared: as a group, NAEs showed performance statistically comparable to that of CEs. We also found that contrast sensitivity for static second-order stimuli was strongly impaired in AEs and in some of the NAEs, consistent with previous studies. In addition, some amblyopes showed impaired perception of static second-order stimuli but not of second-order moving gratings. These results may suggest a dissociation between the processing of static and moving second-order gratings in anisometropic amblyopia.
Collapse
Affiliation(s)
- Yong Tang
- CAS Key Laboratory of Brain Function and Diseases, and School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Research and Treatment Center of Amblyopia and Strabismus, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
| | - Caiyuan Liu
- Research and Treatment Center of Amblyopia and Strabismus, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
| | - Zhongjian Liu
- Research and Treatment Center of Amblyopia and Strabismus, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
| | - Xiaopeng Hu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, People's Republic of China
| | - Yong-Qiang Yu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, People's Republic of China
| | - Yifeng Zhou
- CAS Key Laboratory of Brain Function and Diseases, and School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Research and Treatment Center of Amblyopia and Strabismus, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Science, Beijing, People's Republic of China
| |
Collapse
|