1
White PA. The perceptual timescape: Perceptual history on the sub-second scale. Cogn Psychol 2024; 149:101643. PMID: 38452720. DOI: 10.1016/j.cogpsych.2024.101643.
Abstract
There is a high-capacity store of brief time span (∼1000 ms), often called iconic memory or sensory memory, into which information enters from perceptual processing. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time distance information, which marks every item of information in the perceptual timescape according to how far in the past it occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.
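The two kinds of time-marking information described in the abstract can be illustrated with a toy data structure. This is only a sketch of the idea, not the author's model: the class, field names, and the decay rule are assumptions made here; only the ~1000 ms span and the age/order tagging come from the abstract.

```python
import time
from collections import deque

class PerceptualTimescape:
    """Toy buffer: recent percepts tagged with time-distance and order."""

    SPAN = 1.0  # seconds; the ~1000 ms store described in the abstract

    def __init__(self):
        self._items = deque()  # (timestamp, percept) pairs, oldest first

    def enter(self, percept, now=None):
        """New information enters from perceptual processing."""
        now = time.monotonic() if now is None else now
        self._items.append((now, percept))
        # Items older than the span decay out of the store.
        while self._items and now - self._items[0][0] > self.SPAN:
            self._items.popleft()

    def snapshot(self, now=None):
        """Each item carries time-distance (age) and ordinal position."""
        now = time.monotonic() if now is None else now
        return [{"percept": p, "age": now - t, "order": i}
                for i, (t, p) in enumerate(self._items)
                if now - t <= self.SPAN]
```

Querying the buffer returns every recent item together with how far in the past it occurred and where it falls in temporal order, which is the temporally segregated representation the abstract argues for.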
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, Wales CF10 3YG, United Kingdom.
2
Rampone G, Makin ADJ, Tyson-Carr J, Bertamini M. Spinning objects and partial occlusion: Smart neural responses to symmetry. Vision Res 2021; 188:1-9. PMID: 34271291. DOI: 10.1016/j.visres.2021.06.009.
Abstract
In humans, extrastriate visual areas are strongly activated by symmetry. However, perfect symmetry is rare in natural visual images. Recent findings showed that when parts of a symmetric shape are presented at different points in time, integration relies on a perceptual memory buffer. Does this temporal integration need a retinotopic reference frame? For the first time, we tested integration of parts in both the temporal and spatial domains, using a non-retinotopic frame of reference. In Experiment 1, an irregular polygonal shape (either symmetric or asymmetric) was partly occluded by a rectangle for 500 ms (T1). The rectangle then moved to the opposite side to reveal the other half of the shape, whilst occluding the previously visible half (T2). The reference frame for the object was static: the two parts, revealed over time, stimulated retinotopically corresponding receptive fields. A symmetry-specific ERP response from ~300 ms after T2 was observed. In Experiment 2, dynamic occlusion was combined with an additional step at T2: the new half-shape and occluder were rotated by 90°. There was therefore a moving frame of reference, and the retinal correspondence between the two parts was disrupted. A weaker but significant symmetry-specific response was still recorded. This result extends previous findings: a global symmetry representation can be achieved in extrastriate areas non-retinotopically, through integration in both the temporal and spatial domains.
Affiliation(s)
- Giulia Rampone
- Department of Psychology, University of Liverpool, Eleanor Rathbone Building, Liverpool L69 7ZA, UK.
- Alexis D J Makin
- Department of Psychology, University of Liverpool, Eleanor Rathbone Building, Liverpool L69 7ZA, UK.
- John Tyson-Carr
- Department of Psychology, University of Liverpool, Eleanor Rathbone Building, Liverpool L69 7ZA, UK.
- Marco Bertamini
- Department of Psychology, University of Liverpool, Eleanor Rathbone Building, Liverpool L69 7ZA, UK; Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy.
3
Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views. J Neurosci 2017; 38:659-678. PMID: 29196319. DOI: 10.1523/jneurosci.1318-17.2017.
Abstract
We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. 
These findings provide strong evidence for a global encoding of shape in the LOC, regardless of the integration processes required to generate the shape percept.
Significance Statement: Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions, even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object is moving behind a narrow slit. In this scenario, spatial information is limited at any moment, so the whole-shape percept can only be inferred by integration of successive shape views over time. We find that the LOC carries shape-specific information recovered using such temporal integration processes. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities.
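The shuffled-sliver control described in this abstract has a simple geometric core, sketched below under strong simplifications: a one-pixel slit, a known drift of one pixel per frame, and a random binary "shape". None of this is the authors' actual stimulus or analysis; it only illustrates why temporal order matters for spatiotemporal integration.

```python
import numpy as np

def slit_views(img):
    # One-pixel-wide slivers, one per frame, as the figure drifts past
    # a fixed slit at one pixel per frame: column t is visible at time t.
    return [img[:, t] for t in range(img.shape[1])]

def integrate(views, order):
    # Place each sliver at the position implied by its time stamp,
    # i.e., spatiotemporal integration along the motion trajectory.
    return np.stack([views[t] for t in order], axis=1)

rng = np.random.default_rng(0)
shape_img = (rng.random((8, 8)) > 0.5).astype(int)  # arbitrary "shape"
views = slit_views(shape_img)

whole = integrate(views, range(shape_img.shape[1]))              # coherent figure
jumbled = integrate(views, rng.permutation(shape_img.shape[1]))  # shuffled slivers
```

Integrating the slivers in temporal order recovers the figure exactly, whereas the shuffled order yields a scrambled pattern, mirroring the control condition in which slit-invariant shape information dropped dramatically.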
4
Öğmen H, Herzog MH. A New Conceptualization of Human Visual Sensory-Memory. Front Psychol 2016; 7:830. PMID: 27375519. PMCID: PMC4899472. DOI: 10.3389/fpsyg.2016.00830.
Abstract
Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the observer are in motion. Hence, half a century after its formulation, it remains unresolved whether and how the first stage of the modal model serves any useful function, and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.
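The proposed change of reference frame can be caricatured in a few lines: instead of storing samples at retinal coordinates, store them relative to the trajectory of the motion group they belong to. This is purely an illustration of the coordinate change; the function name and the numbers are invented here, and how the visual system computes group trajectories is exactly what the paper is about.

```python
import numpy as np

def to_group_frame(retinotopic_pos, group_trajectory):
    """Re-express retinotopic position samples of an item relative to the
    trajectory of its motion group (a toy stand-in for the motion-grouping
    based reference frame proposed in the abstract)."""
    r = np.asarray(retinotopic_pos, dtype=float)
    g = np.asarray(group_trajectory, dtype=float)
    return r - g  # position relative to the moving group origin

# An item that is stable *within* a group drifting rightward 1 unit/frame:
item  = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]  # retinotopic samples
group = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]  # group origin per frame
stable = to_group_frame(item, group)          # all zeros
```

In the group frame the item occupies one constant location, so a sensory store indexed in these coordinates can accumulate information about it even though its retinotopic position changes every frame; a purely retinotopic store would smear it across locations.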
Affiliation(s)
- Haluk Öğmen
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA.
- Michael H Herzog
- Laboratory of Psychophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
5
Abstract
A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379-393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that different reference-field interactions are nonlinear and depend on how the motion vectors are grouped. These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames.
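Johansson's decomposition, as described in the abstract, splits each retinal motion vector into a common component shared by the group and a residual relative component. Because the problem is ill-posed, some constraint must pick one solution; the sketch below uses the simplest conceivable one (common motion = group mean), which is a choice made here for illustration, not a claim about the visual system.

```python
import numpy as np

def decompose(retinal_vectors):
    """Johansson-style vector decomposition under one simple constraint:
    the common motion is the mean of the group's retinal motion vectors.
    Illustrative only; the mean is just one of many possible solutions."""
    v = np.asarray(retinal_vectors, dtype=float)
    common = v.mean(axis=0)   # shared (group) motion component
    relative = v - common     # part motion relative to the group
    return common, relative

# Toy example: a hub that only translates, and a rim point that also
# carries an upward rotational component.
common, rel = decompose([[1.0, 0.0],    # hub: pure translation
                         [1.0, 1.0]])   # rim: translation + rotation
```

By construction the relative components sum to zero, so the common vector carries all of the group's net translation, matching the intuition that part motion is seen relative to the object's own motion.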
6
Bachmann T. Neurobiological mechanisms behind the spatiotemporal illusions of awareness used for advocating prediction or postdiction. Front Psychol 2013; 3:593. PMID: 23293625. PMCID: PMC3537166. DOI: 10.3389/fpsyg.2012.00593.
Abstract
The fact that it takes time for the brain to process information from the changing environment underlies many experimental phenomena of awareness of spatiotemporal events, including a number of astonishing illusions. These phenomena have been explained from the predictive and postdictive theoretical perspectives. Here I describe the most extensively studied phenomena in order to see how well the two perspectives can explain them. Next, the neurobiological perceptual retouch mechanism for producing awareness of stimulation is characterized, and its role in causing the listed illusions is described. This article presents a perspective on how brain mechanisms of conscious perception produce the phenomena that support the postdictive view. At the same time, some of the phenomena cannot be explained by the traditional postdictive account, but can be interpreted from the perspective of perceptual retouch theory.
Affiliation(s)
- Talis Bachmann
- Laboratory of Cognitive Neuroscience, Institute of Public Law, University of Tartu (Tallinn branch), Tartu, Estonia.
7
Ağaoğlu MN, Herzog MH, Oğmen H. Non-retinotopic feature processing in the absence of retinotopic spatial layout and the construction of perceptual space from motion. Vision Res 2012; 71:10-7. PMID: 22929811. DOI: 10.1016/j.visres.2012.08.009.
Abstract
The spatial representation of a visual scene in the early visual system is well known. The optics of the eye map the three-dimensional environment onto two-dimensional images on the retina. These retinotopic representations are preserved in the early visual system. Retinotopic representations and processing are among the most prevalent concepts in visual neuroscience. However, it has long been known that a retinotopic representation of the stimulus is neither sufficient nor necessary for perception. Saccadic Stimulus Presentation Paradigm and the Ternus-Pikler displays have been used to investigate non-retinotopic processes with and without eye movements, respectively. However, neither of these paradigms eliminates the retinotopic representation of the spatial layout of the stimulus. Here, we investigated how stimulus features are processed in the absence of a retinotopic layout and in the presence of retinotopic conflict. We used anorthoscopic viewing (slit viewing) and pitted a retinotopic feature-processing hypothesis against a non-retinotopic feature-processing hypothesis. Our results support the predictions of the non-retinotopic feature-processing hypothesis and demonstrate the ability of the visual system to operate non-retinotopically at a fine feature processing level in the absence of a retinotopic spatial layout. Our results suggest that perceptual space is actively constructed from the perceptual dimension of motion. The implications of these findings for normal ecological viewing conditions are discussed.
Affiliation(s)
- Mehmet N Ağaoğlu
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77024-4005, USA
8
Otto TU, Ogmen H, Herzog MH. Feature integration across space, time, and orientation. J Exp Psychol Hum Percept Perform 2010; 35:1670-86. PMID: 19968428. DOI: 10.1037/a0015798.
Abstract
The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time.
Affiliation(s)
- Thomas U Otto
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne, Switzerland.
9
Öğmen H, Herzog MH. The Geometry of Visual Perception: Retinotopic and Non-retinotopic Representations in the Human Visual System. Proc IEEE 2010; 98:479-492. PMID: 22334763. PMCID: PMC3277856. DOI: 10.1109/jproc.2009.2039028.
Abstract
Geometry is closely linked to visual perception; yet, very little is known about the geometry of visual processing beyond early retinotopic organization. We present a variety of perceptual phenomena showing that a retinotopic representation is neither sufficient nor necessary to support form perception. We discuss the popular "object files" concept as a candidate for non-retinotopic representations and, based on its shortcomings, suggest future directions for research using local manifold representations. We suggest that these manifolds are created by the emergence of dynamic reference-frames that result from motion segmentation. We also suggest that the metric of these manifolds is based on relative motion vectors.
Affiliation(s)
- Haluk Öğmen
- Department of Electrical & Computer Engineering and Center for NeuroEngineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA.
- Michael H. Herzog
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland.
10
Plomp G, Mercier MR, Otto TU, Blanke O, Herzog MH. Non-retinotopic feature integration decreases response-locked brain activity as revealed by electrical neuroimaging. Neuroimage 2009; 48:405-14. DOI: 10.1016/j.neuroimage.2009.06.031.
11
Abstract
When a figure moves behind a stationary narrow slit, observers often report seeing the figure as a whole, a phenomenon called slit viewing or anorthoscopic perception. Interestingly, in slit viewing, the figure is perceived compressed along the axis of motion. As with other perceptual distortions, it is unclear whether the perceptual space in the vicinity of the slit or the representation of the figure itself undergoes compression. In a psychophysical experiment, we tested these two hypotheses. We found that the percept of a stationary bar, presented within the slit, was not distorted even when at the same time a circle underwent compression by moving through the slit. This result suggests that the compression of form results from figural rather than from space compression. In support of this hypothesis, we found that when the bar was perceptually grouped with the circle, the bar appeared compressed. Our results show that, in slit viewing, the distortion occurs at a non-retinotopic level where grouped objects are jointly represented.
Affiliation(s)
- Murat Aydin
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77024-4005, USA.