1
Nakada H, Murakami I. Local motion signals silence the perceptual solution of global apparent motion. J Vis 2023; 23:12. PMID: 37378990. DOI: 10.1167/jov.23.6.12.
Abstract
Stimuli for apparent motion can be ambiguous in their frame-to-frame correspondences among visual elements. This occurs when visual inputs pose a correspondence problem that admits multiple perceptual solutions. Herein we examined the influence of local visual motions on the perceptual solution under such a multistable situation. We repeatedly alternated two stimulus frames in a circular configuration in which discrete elements in two different colors alternated in space and switched their colors frame by frame. These stimuli were compatible with three perceptual solutions: globally consistent clockwise rotation, globally consistent counterclockwise rotation, and color flicker at the same locations without global apparent motion. We added a sinusoidal grating continuously drifting within each element to examine whether the perceptual solution for global apparent motion was affected by the local continuous motions. We found that the local motions suppressed global apparent motion and promoted the alternative perceptual solution in which the local elements merely flickered between the two colors while the gratings drifted within static windows. We conclude that local continuous motions, acting as counterevidence against global apparent motion, contributed to individuating visual objects and to integrating visual features so as to maintain object identity at the same location.
Affiliation(s)
- Hoko Nakada
- Department of Psychology, The University of Tokyo, Tokyo, Japan
- Ikuya Murakami
- Department of Psychology, The University of Tokyo, Tokyo, Japan
2
Pichlmeier S, Pfeiffer T. Attentional capture in multiple object tracking. J Vis 2021; 21:16. PMID: 34379083. PMCID: PMC8363777. DOI: 10.1167/jov.21.8.16.
Abstract
Attentional processes are generally assumed to be involved in multiple object tracking (MOT). The attentional capture paradigm is regularly used to study conditions of attentional control, but it has not previously been used to assess the influence of sudden-onset distractor stimuli in MOT. We investigated whether attentional capture occurs in MOT: are onset distractors processed at all in dynamic attentional tasks? We found that sudden-onset distractors were effective in lowering probe detection, thus demonstrating attentional capture. Tracking performance as a dependent measure was not affected. The attentional capture effect persisted under higher tracking load (Experiment 2) and increased dramatically at a lower presentation frequency of the onset distractor (Experiment 3). Tracking performance suffered only when onset distractors were presented serially with very short time gaps in between, effectively disturbing the re-engagement of attention on the tracking set (Experiment 4). We propose that rapid disengagement and re-engagement of attention on target objects, together with an additional, more basic process that continuously provides location information, allow observers to manage strong disruptions of attention during tracking.
Affiliation(s)
- Sebastian Pichlmeier
- Institute of Psychology, Karlsruhe University of Education, Karlsruhe, Germany
- Till Pfeiffer
- Institute of Psychology, Karlsruhe University of Education, Karlsruhe, Germany
3
Abstract
Our visual system briefly retains a trace of a stimulus after it disappears. This phenomenon is known as iconic memory, and its contents are thought to be temporally integrated with subsequent visual inputs to produce a single composite representation. However, there is little consensus on the temporal integration between iconic memory and subsequent visual inputs. Here, we show that iconic memory revises its contents depending upon the configuration of the newly produced single representation, with particular temporal characteristics. The Poggendorff illusion, in which two collinear line segments are perceived as non-collinear because of an intervening rectangle, was observed when the rectangle was presented during a period spanning from 50 ms before to 200 ms after the presentation of the line segments. The illusion was most prominent when the rectangle was presented approximately 100 to 150 ms after the line segments. Furthermore, the illusion was observed at the center of a moving object, but only when the line segments were presented before the rectangle. These results indicate that the contents of iconic memory are susceptible to the modulatory influence of subsequent visual inputs before being translated into conscious perception in a time-locked manner, both in retinotopic and in non-retinotopic, object-centered frames of reference.
4
Fracasso A, Melcher D. Saccades Influence the Visibility of Targets in Rapid Stimulus Sequences: The Roles of Mislocalization, Retinal Distance and Remapping. Front Syst Neurosci 2016; 10:58. PMID: 27445718. PMCID: PMC4924485. DOI: 10.3389/fnsys.2016.00058.
Abstract
Briefly presented targets around the time of a saccade are mislocalized towards the saccadic landing point. This has been taken as evidence for a remapping mechanism that accompanies each eye movement, helping maintain visual stability across large retinal shifts. Previous studies have shown that spatial mislocalization is greatly diminished when trains of brief stimuli are presented at a high frequency rate, which might help to explain why mislocalization is rarely perceived in everyday viewing. Studies in the laboratory have shown that mislocalization can reduce metacontrast masking by causing target stimuli in a masking sequence to be perceived as shifted in space towards the saccadic target and thus more easily discriminated. We investigated the influence of saccades on target discrimination when target and masks were presented in a rapid serial visual presentation (RSVP), as well as with forward masking and with backward masking. In a series of experiments, we found that performance was influenced by the retinal displacement caused by the saccade itself but that an additional component of un-masking occurred even when the retinal location of target and mask was matched. These results speak in favor of a remapping mechanism that begins before the eyes start moving and continues well beyond saccadic termination.
Affiliation(s)
- Alessio Fracasso
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- David Melcher
- Center for Mind/Brain Sciences, Department of Cognitive Sciences, University of Trento, Rovereto, Italy
5
Öğmen H, Herzog MH. A New Conceptualization of Human Visual Sensory-Memory. Front Psychol 2016; 7:830. PMID: 27375519. PMCID: PMC4899472. DOI: 10.3389/fpsyg.2016.00830.
Abstract
Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping-based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.
Affiliation(s)
- Haluk Öğmen
- Department of Electrical and Computer Engineering, and Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA
- Michael H Herzog
- Laboratory of Psychophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
6
Identifying visual targets amongst interfering distractors: Sorting out the roles of perceptual load, dilution, and attentional zoom. Atten Percept Psychophys 2016; 78:1822-38. DOI: 10.3758/s13414-016-1149-9.
7
Abstract
Progress in magnetic resonance imaging (MRI) now makes it possible to identify the major white matter tracts in the living human brain. These tracts are important because they carry many of the signals communicated between different brain regions. MRI methods coupled with biophysical modeling can measure the tissue properties and structural features of the tracts that impact our ability to think, feel, and perceive. This review describes the fundamental ideas of the MRI methods used to identify the major white matter tracts in the living human brain.
Affiliation(s)
- Brian A Wandell
- Department of Psychology and Stanford Neurosciences Institute, Stanford University, Stanford, California 94305
8
Spatial properties of non-retinotopic reference frames in human vision. Vision Res 2015; 113:44-54. PMID: 26049040. DOI: 10.1016/j.visres.2015.05.010.
Abstract
Many visual attributes of a target stimulus are computed according to dynamic, non-retinotopic reference frames. For example, the motion trajectory of a reflector on a bicycle wheel is perceived as orbital, even though it is in fact cycloidal in retinal as well as in spatial coordinates. We do not perceive the cycloidal motion because the linear motion of the bicycle is discounted. In other words, the linear motion common to all bicycle components serves as a non-retinotopic reference frame, with respect to which the residual (orbital) motion of the reflector is computed. Very little is known about the mechanisms underlying the formation and operation of non-retinotopic reference frames. Here, we investigate the spatial properties of non-retinotopic reference frames. We show that reference frames are not restricted to the boundaries of moving stimuli but extend over space. Using a variation of the Ternus-Pikler paradigm, we show that the spatial extent of a non-retinotopic reference frame is independent of the size of the inducing elements and of the target position near the object boundary. While dynamic reference frames interact with each other significantly, a static reference frame has no effect on a dynamic one. The magnitude of the interaction between two neighboring dynamic reference frames increases as the distance between them decreases. Finally, our results indicate that reference-frame strength is significantly attenuated if the locus of attention is shifted to the elements of a neighboring reference frame instead of the main one. We suggest that these results can be conceptualized as reference frames that act and interact as fields.
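The bicycle-reflector geometry in this abstract can be verified directly: subtracting the wheel hub's common linear translation from the reflector's cycloidal path leaves a circle. A minimal sketch (an illustration of the geometry, not code from the cited study; the radius and sampling values are arbitrary assumptions):

```python
import numpy as np

r = 1.0                                  # wheel radius (arbitrary units)
t = np.linspace(0.0, 4.0 * np.pi, 400)  # rotation angle over two revolutions

# In retinal/spatial coordinates, a rim-mounted reflector on a rolling
# wheel traces a cycloid: the hub translates by r*t while the rim rotates.
x_cycloid = r * (t - np.sin(t))
y_cycloid = r * (1.0 - np.cos(t))

# Discount the linear motion common to all bicycle parts (the
# non-retinotopic reference frame): subtract the hub's translation.
x_rel = x_cycloid - r * t   # residual horizontal motion
y_rel = y_cycloid - r       # place the hub at the origin

# The residual trajectory is a circle of radius r: the orbital motion
# that observers actually perceive.
radius = np.sqrt(x_rel**2 + y_rel**2)
print(np.allclose(radius, r))  # True
```

The residual coordinates reduce algebraically to (-r sin t, -r cos t), which is why every sampled point lies exactly on the rim circle.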
9
Nakashima R, Yokosawa K. [The role of sustained attention in shift-contingent change blindness]. Shinrigaku Kenkyu (The Japanese Journal of Psychology) 2015; 85:603-608. PMID: 25799873. DOI: 10.4992/jjpsy.85.14303.
Abstract
Previous studies of change blindness have examined the effect of temporal factors (e.g., blank duration) on attention in change detection. This study examined the effect of spatial factors (i.e., whether the locations of original and changed objects are the same or different) on attention in change detection, using a shift-contingent change blindness task. We used a flicker paradigm in which the location of a to-be-judged target image was manipulated (shift, no-shift). In shift conditions, the image of an array of objects was spatially shifted so that all objects appeared in new locations; in no-shift conditions, all object images of an array appeared at the same locations. The presence of visual stimuli (dots) in the blank display between the two images was manipulated (dot, no-dot) under the assumption that abrupt onsets of these stimuli would capture attention. Results indicated that change detection performance was improved by exogenous attentional capture in the shift condition. Thus, we suggest that attention can play an important role in change detection during shift-contingent change blindness.
10
Nonspecific competition underlies transient attention. Psychol Res 2014; 79:844-60. PMID: 25187215. DOI: 10.1007/s00426-014-0605-1.
Abstract
Cueing a target by abrupt visual stimuli enhances its perception in a rapid but short-lived fashion, an effect known as transient attention. Our recent study showed that when targets are cued at a constant, central location, the emergence of the transient performance pattern was dependent on the presence of competing distractors, whereas targets presented in isolation were enhanced in a sustained manner (Wilschut et al., PLoS ONE, 6:e27661, 2011). The current study examined in more detail whether the transience depends on the specific nature of the competition. We first replicated and extended the competition-dependent transient pattern for peripheral and variable target locations. We then investigated the role of feature similarity, compatibility, and proximity. Both competition by feature similarity and compatibility between the target and distractors were found to impair performance, but effects were additive with the effects of the cueing interval and did not change the transient performance function. Varying the spatial distance between target and distractors yielded mixed evidence, but here too a transient pattern could be observed for targets flanked by both close and far distractors. The results thus show that the presence or absence of competition determines whether attention appears transient or sustained, while the specific nature of the competition (in terms of location or feature) affects selection independent of time.
11
Abstract
Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition.
Affiliation(s)
- Jedediah M Singer
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
12
Integration of motion responses underlying directional motion anisotropy in human early visual cortical areas. PLoS One 2013; 8:e67468. PMID: 23840711. PMCID: PMC3696083. DOI: 10.1371/journal.pone.0067468.
Abstract
Recent imaging studies have reported directional motion biases in human visual cortex when perceiving moving random dot patterns. It has been hypothesized that these biases occur as a result of the integration of motion detector activation along the path of motion in visual cortex. In this study we investigate the nature of such motion integration with functional MRI (fMRI) using different motion stimuli. Three types of moving random dot stimuli were presented, showing either coherent motion, motion with spatial decorrelations or motion with temporal decorrelations. The results from the coherent motion stimulus reproduced the centripetal and centrifugal directional motion biases in V1, V2 and V3 as previously reported. The temporally decorrelated motion stimulus resulted in both centripetal and centrifugal biases similar to coherent motion. In contrast, the spatially decorrelated motion stimulus resulted in small directional motion biases that were only present in parts of visual cortex coding for higher eccentricities of the visual field. In combination with previous results, these findings indicate that biased motion responses in early visual cortical areas most likely depend on the spatial integration of a simultaneously activated motion detector chain.
13
Lin Z, He S. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking. J Vis 2012; 12:24. PMID: 23104817. PMCID: PMC3587025. DOI: 10.1167/12.11.24.
Abstract
Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.
Affiliation(s)
- Zhicheng Lin
- University of Minnesota, Minneapolis, MN, USA
- University of Washington, Seattle, WA, USA
- Sheng He
- University of Minnesota, Minneapolis, MN, USA
14
15
Nonretinotopic processing is related to postdictive size modulation in apparent motion. Atten Percept Psychophys 2011; 73:1522-31. PMID: 21472506. DOI: 10.3758/s13414-011-0128-4.
Abstract
The present study examined how the perceived size of a briefly presented leading flash is modulated by trailing motion signals. Observers were presented with two vertical green bars that were followed by white bars of different lengths, presented at locations different from those of the green bars. The observers' task was to discriminate which of the green bars was shorter (Experiment 1) or whether the lengths of the green bars were equal (Experiments 2 and 3). The green bar producing apparent motion with a following shorter white bar was reported to be shorter than the green bar producing apparent motion with a following longer white bar, not only when motion correspondence was determined on the basis of retinal proximity (Experiments 1 and 2), but also when it was determined on the basis of nonretinotopic information, that is, relative location within each perceptual group of bars (Experiment 3). These results indicate that motion processing involving object updating or motion deblurring in a nonretinotopic frame of reference is related to postdictive size modulation.
16
Aydın M, Herzog MH, Oğmen H. Barrier effects in non-retinotopic feature attribution. Vision Res 2011; 51:1861-71. PMID: 21767561. DOI: 10.1016/j.visres.2011.06.016.
Abstract
When objects move in the environment, their retinal images can undergo drastic changes, and features of different objects can be intermixed in the retinal image. Notwithstanding these changes and ambiguities, the visual system is capable of correctly establishing feature-object relationships and of maintaining the individual identities of objects through space and time. Recently, using a Ternus-Pikler display, we have shown that perceived motion correspondences serve as the medium for non-retinotopic attribution of features to objects. The purpose of the work reported in this manuscript was to assess whether perceived motion correspondences provide a sufficient condition for feature attribution. Our results show that the introduction of a static "barrier" stimulus can interfere with the feature attribution process. They also indicate that the barrier stops feature attribution through interference with the feature attribution process itself rather than through mechanisms related to perceived motion.
Affiliation(s)
- Murat Aydın
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77024-4005, USA
17
Boi M, Oğmen H, Herzog MH. Motion and tilt aftereffects occur largely in retinal, not in object, coordinates in the Ternus-Pikler display. J Vis 2011; 11(3):7. PMID: 21389102. DOI: 10.1167/11.3.7.
Abstract
Recent studies have shown that a variety of aftereffects occurs in a non-retinotopic frame of reference. These findings have been taken as strong evidence that remapping of visual information occurs in a hierarchic manner in the human cortex with an increasing magnitude from early to higher levels. Other studies, however, failed to find non-retinotopic aftereffects. These experiments all relied on paradigms involving eye movements. Recently, we have developed a new paradigm, based on the Ternus-Pikler display, which tests retinotopic vs. non-retinotopic processing without the involvement of eye movements. Using this paradigm, we found strong evidence that attention, form, and motion processing can occur in a non-retinotopic frame of reference. Here, we show that motion and tilt aftereffects are largely retinotopic.
Affiliation(s)
- Marco Boi
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland.
18
19
Otto TU, Oğmen H, Herzog MH. Perceptual learning in a nonretinotopic frame of reference. Psychol Sci 2010; 21:1058-63. PMID: 20585052. DOI: 10.1177/0956797610376074.
Abstract
Perceptual learning is the ability to improve perception through practice. Perceptual learning is usually specific for the task and features learned. For example, improvements in performance for a certain stimulus do not transfer if the stimulus is rotated by 90 degrees or is presented at a different location. These findings are usually taken as evidence that orientation-specific, retinotopic encoding processes are changed during training. In this study, we used a novel masking paradigm in which the offset in an invisible, oblique vernier stimulus was perceived in an aligned vertical or horizontal flanking stimulus presented at a different location. Our results show that learning is specific for the perceived orientation of the vernier offset but not for its actual orientation and location. Specific encoding processes cannot be invoked to explain this improvement. We propose that perceptual learning involves changes in nonretinotopic, attentional readout processes.
Affiliation(s)
- Thomas U Otto
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL)
20
Terao M, Watanabe J, Yagi A, Nishida S. Smooth pursuit eye movements improve temporal resolution for color perception. PLoS One 2010; 5:e11214. PMID: 20574511. PMCID: PMC2888568. DOI: 10.1371/journal.pone.0011214.
Abstract
Human observers see a single mixed color (yellow) when different colors (red and green) rapidly alternate. Accumulating evidence suggests that the critical temporal frequency beyond which chromatic fusion occurs does not simply reflect the temporal limit of peripheral encoding. However, it remains poorly understood how the central processing controls the fusion frequency. Here we show that the fusion frequency can be elevated by extra-retinal signals during smooth pursuit. This eye movement can keep the image of a moving target in the fovea, but it also introduces a backward retinal sweep of the stationary background pattern. We found that the fusion frequency was higher when retinal color changes were generated by pursuit-induced background motions than when the same retinal color changes were generated by object motions during eye fixation. This temporal improvement cannot be ascribed to a general increase in contrast gain of specific neural mechanisms during pursuit, since the improvement was not observed with a pattern flickering without changing position on the retina or with a pattern moving in the direction opposite to the background motion during pursuit. Our findings indicate that chromatic fusion is controlled by a cortical mechanism that suppresses motion blur. A plausible mechanism is that eye-movement signals change spatiotemporal trajectories along which color signals are integrated so as to reduce chromatic integration at the same locations (i.e., along stationary trajectories) on the retina that normally causes retinal blur during fixation.
Affiliation(s)
- Masahiko Terao
- NTT Communication Science Laboratories, NTT Corporation, Kyoto, Japan.
21
Otto TU, Oğmen H, Herzog MH. Feature integration across space, time, and orientation. J Exp Psychol Hum Percept Perform 2010; 35:1670-86. PMID: 19968428. DOI: 10.1037/a0015798.
Abstract
The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time.
Affiliation(s)
- Thomas U Otto
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Switzerland.
22
Öğmen H, Herzog MH. The Geometry of Visual Perception: Retinotopic and Non-retinotopic Representations in the Human Visual System. Proc IEEE 2010; 98:479-492. PMID: 22334763. PMCID: PMC3277856. DOI: 10.1109/jproc.2009.2039028.
Abstract
Geometry is closely linked to visual perception; yet, very little is known about the geometry of visual processing beyond early retinotopic organization. We present a variety of perceptual phenomena showing that a retinotopic representation is neither sufficient nor necessary to support form perception. We discuss the popular "object files" concept as a candidate for non-retinotopic representations and, based on its shortcomings, suggest future directions for research using local manifold representations. We suggest that these manifolds are created by the emergence of dynamic reference-frames that result from motion segmentation. We also suggest that the metric of these manifolds is based on relative motion vectors.
Affiliation(s)
- Haluk Öğmen
- Department of Electrical & Computer Engineering and Center for NeuroEngineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
- Michael H. Herzog
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
23
Plomp G, Mercier MR, Otto TU, Blanke O, Herzog MH. Non-retinotopic feature integration decreases response-locked brain activity as revealed by electrical neuroimaging. Neuroimage 2009; 48:405-14. DOI: 10.1016/j.neuroimage.2009.06.031.
24
Holcombe AO. Seeing slow and seeing fast: two limits on perception. Trends Cogn Sci 2009; 13:216-21. PMID: 19386535. DOI: 10.1016/j.tics.2009.02.005.