1. Kim T, Pasupathy A. Neural correlates of crowding in macaque area V4. bioRxiv 2023:2023.10.16.562617. PMID: 37905025; PMCID: PMC10614871; DOI: 10.1101/2023.10.16.562617.
Abstract
Visual crowding refers to the phenomenon where a target object that is easily identifiable in isolation becomes difficult to recognize when surrounded by other stimuli (distractors). Extensive psychophysical studies support two alternative possibilities for the underlying mechanisms. One hypothesis suggests that crowding results from the loss of visual information due to pooled encoding of multiple nearby stimuli in the mid-level processing stages along the ventral visual pathway. Alternatively, crowding may arise from limited resolution in decoding object information during recognition, and the encoded information may remain inaccessible unless it is salient. To rigorously test these alternatives, we studied the responses of single neurons in macaque area V4, an intermediate stage of the ventral, object-processing pathway, to parametrically designed crowded displays and their texture-statistics-matched metameric counterparts. Our investigations reveal striking parallels between how crowding parameters, e.g., number, distance, and position of distractors, influence human psychophysical performance and V4 shape selectivity. Importantly, we found that enhancing the salience of a target stimulus could reverse crowding effects even in highly cluttered scenes, and such reversals could be protracted, reflecting a dynamical process. Overall, we conclude that a pooled encoding of nearby stimuli cannot explain the observed responses, and we propose an alternative model in which V4 neurons preferentially encode salient stimuli in crowded displays.
Affiliation(s)
- Taekjun Kim
- Department of Biological Structure, University of Washington, Seattle, WA 98195
- Washington National Primate Research Center, University of Washington, Seattle, WA 98195
- Anitha Pasupathy
- Department of Biological Structure, University of Washington, Seattle, WA 98195
- Washington National Primate Research Center, University of Washington, Seattle, WA 98195
2. Nedimović P, Zdravković S, Domijan D. Empirical evaluation of computational models of lightness perception. Sci Rep 2022;12:22039. PMID: 36543784; PMCID: PMC9772371; DOI: 10.1038/s41598-022-22395-7.
Abstract
Lightness of a surface depends not only on its physical characteristics but also on the properties of the surrounding context. As a result, varying the context can significantly alter surface lightness, an effect exploited in many lightness illusions. Computational models can produce outcomes similar to human illusory percepts, allowing for an explicit assessment of the applied mechanisms and principles. We tested 8 computational models on 13 typical displays used in lightness research (11 illusions and 2 Mondrians) and compared them with results from human participants (N = 85). Results show that the HighPass and MIR models predict empirical results for simultaneous lightness contrast (SLC) and its close variations. ODOG and its newer variants (ODOG-2 and L-ODOG) were able to predict the effect of White's illusion in addition to the SLC displays. RETINEX was able to predict the effects of both the SLC displays and the Dungeon illusion. The dynamic decorrelation model was able to predict the obtained effects for all tested stimuli except two SLC variations. The FL-ODOG model was best at simulating human data, as it was able to predict empirical results for all displays bar the Reversed contrast illusion. Finally, most models underperform on the Mondrian displays, which represent the most natural stimuli for the human visual system.
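For illustration, the simplest class of mechanism tested above, high-pass spatial filtering, can be sketched in a few lines. This is a minimal, hypothetical example, not the HighPass model as implemented in the paper; the display layout and the filter width (sigma) are assumptions made only for this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Build a simultaneous-lightness-contrast (SLC) display:
# two identical mid-gray targets, one on a dark and one on a light background.
display = np.zeros((100, 200))
display[:, 100:] = 1.0                 # right half: light background
display[40:60, 40:60] = 0.5            # target on the dark background
display[40:60, 140:160] = 0.5          # identical target on the light background

# High-pass filtering: subtract a local (Gaussian-blurred) mean from the image.
highpass = display - gaussian_filter(display, sigma=15)

left_target = highpass[40:60, 40:60].mean()
right_target = highpass[40:60, 140:160].mean()

# The physically identical targets now differ in filtered value, in the same
# direction as the SLC illusion (the target on the dark background looks lighter).
print(f"target on dark background:  {left_target:+.3f}")
print(f"target on light background: {right_target:+.3f}")
```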
Affiliation(s)
- Predrag Nedimović
- Laboratory for Experimental Psychology, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia.
- Sunčica Zdravković
- Laboratory for Experimental Psychology, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
- Laboratory for Experimental Psychology, Department of Psychology, Faculty of Philosophy, University of Novi Sad, Novi Sad, Serbia
- Dražen Domijan
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
3. Choung OH, Gordillo D, Roinishvili M, Brand A, Herzog MH, Chkonia E. Intact and deficient contextual processing in schizophrenia patients. Schizophr Res Cogn 2022;30:100265. PMID: 36119400; PMCID: PMC9477851; DOI: 10.1016/j.scog.2022.100265.
Abstract
Schizophrenia patients are known to have deficits in contextual vision. However, results are often very mixed. In some paradigms, patients do not take the context into account and, hence, perform more veridically than healthy controls. In other paradigms, context deteriorates performance much more strongly in patients than in healthy controls. These mixed results may be explained by differences in the paradigms as well as by small or biased samples, given the large heterogeneity of patients' deficits. Here, we show that mixed results may also come from idiosyncrasies of the stimuli used, because in variants of the same visual paradigm, tested with the same participants, we found both intact and deficient processing.
Affiliation(s)
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Corresponding author. http://lpsy.epfl.ch
- Dario Gordillo
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Maya Roinishvili
- Laboratory of Vision Physiology, Ivane Beritashvili Centre of Experimental Biomedicine, Tbilisi, Georgia
- Institute of Cognitive Neurosciences, Free University of Tbilisi, Tbilisi, Georgia
- Andreas Brand
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Eka Chkonia
- Department of Psychiatry, Tbilisi State Medical University, Tbilisi, Georgia
4. Herzog MH. The Irreducibility of Vision: Gestalt, Crowding and the Fundamentals of Vision. Vision (Basel) 2022;6(2):35. PMID: 35737422; PMCID: PMC9228288; DOI: 10.3390/vision6020035.
Abstract
What is fundamental in vision has been discussed for millennia. For philosophical realists and the physiological approach to vision, the objects of the outer world are truly given, and failures to perceive objects properly, such as in illusions, are just sporadic misperceptions. The goal is to replace the subjectivity of the mind with careful physiological analyses. Continental philosophy and the Gestaltists are rather skeptical or ignorant about external objects. The percepts themselves are their starting point, because it is hard to deny the truth of one's own percepts. I will show that, whereas both approaches can well explain many visual phenomena with classic visual stimuli, they both have trouble when stimuli become slightly more complex. I suggest that these failures have a deeper conceptual reason, namely that their foundations (objects, percepts) do not hold true. I propose that only physical states exist in a mind-independent manner and that everyday objects, such as bottles and trees, are perceived in a mind-dependent way. The fundamental processing units to process objects are extended windows of unconscious processing, followed by short, discrete conscious percepts.
Affiliation(s)
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
5. Hu L, Zhao C, Wei L, Talhelm T, Wang C, Zhang X. How do humans group non-rigid objects in multiple object tracking? Evidence from grouping by self-rotation. Br J Psychol 2021;113:653-676. PMID: 34921401; DOI: 10.1111/bjop.12547.
Abstract
Previous studies on perceptual grouping found that people can use spatiotemporal and featural information to group spatially separated rigid objects into a unit while tracking moving objects. However, few studies have tested the role of objects' self-motion information in perceptual grouping, although it is of great significance for motion perception in three-dimensional space. In natural environments, objects always move in translation and rotation at the same time. The self-rotation of the objects seriously disrupts the objects' rigidity and topology, creates conflicting movement signals, and results in crowding effects. Thus, this study sought to examine the specific role played by self-rotation information in grouping spatially separated non-rigid objects through a modified multiple object tracking (MOT) paradigm with self-rotating objects. Experiment 1 found that people could use self-rotation information to group spatially separated non-rigid objects, even though this information was deleterious for attentive tracking and irrelevant to the task requirements, and people seemed to use it strategically rather than automatically. Experiment 2 provided stronger evidence that this grouping advantage did come from the self-rotation per se rather than from surface-level cues arising from self-rotation (e.g. similar 2D motion signals and common shapes). Experiment 3 changed the stimuli to more natural 3D cubes to strengthen the impression of self-rotation and again found that self-rotation improved grouping. Finally, Experiment 4 demonstrated that grouping by self-rotation and grouping by changing shape were statistically comparable but additive, suggesting that they were two different sources of object information. Thus, grouping by self-rotation mainly benefited from perceptual differences in motion flow fields rather than in deformation. Overall, this study is the first attempt to identify self-motion as a new feature that people can use to group objects in dynamic scenes, and it sheds light on debates about which entities/units we group and what kinds of information about a target we process while tracking objects.
Affiliation(s)
- Luming Hu
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, National Demonstration Center for Experimental Psychology Education, Beijing Normal University, Beijing, China
- Chen Zhao
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, National Demonstration Center for Experimental Psychology Education, Beijing Normal University, Beijing, China
- Liuqing Wei
- Department of Psychology, Institute of Education, Hubei University, Wuhan, China
- Thomas Talhelm
- Booth School of Business, University of Chicago, Chicago, Illinois, USA
- Chundi Wang
- Department of Psychology and Research Centre of Aeronautic Psychology and Behavior, Beihang University, Beijing, China
- Xuemin Zhang
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, National Demonstration Center for Experimental Psychology Education, Beijing Normal University, Beijing, China
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, China
6.
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Surprisingly, perception can be rescued from crowding if additional flankers are added (uncrowding). Uncrowding is a major challenge for all classic models of crowding and vision in general, because the global configuration of the entire stimulus is crucial. However, it is unclear which characteristics of the configuration impact (un)crowding. Here, we systematically dissected flanker configurations and showed that (un)crowding cannot be easily explained by the effects of the sub-parts or low-level features of the stimulus configuration. Our modeling results suggest that (un)crowding requires global processing. These results are well in line with previous studies showing the importance of global aspects in crowding.
Affiliation(s)
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
7. Unraveling brain interactions in vision: The example of crowding. Neuroimage 2021;240:118390. PMID: 34271157; DOI: 10.1016/j.neuroimage.2021.118390.
Abstract
Crowding, the impairment of target discrimination in clutter, is the standard situation in vision. Traditionally, crowding is explained with (feedforward) models, in which only neighboring elements interact, leading to a "bottleneck" at the earliest stages of vision. It is with this implicit prior that most functional magnetic resonance imaging (fMRI) studies approach the identification of the "neural locus" of crowding, searching for the earliest visual area in which the blood-oxygenation-level-dependent (BOLD) signal is suppressed under crowded conditions. Using this classic approach, we replicated previous findings of crowding-related BOLD suppression starting in V2 and increasing up the visual hierarchy. Surprisingly, under conditions of uncrowding, in which adding flankers improves performance, the BOLD signal was further suppressed. This suggests an important role for top-down connections, which is in line with global models of crowding. To discriminate between various possible models, we used dynamic causal modeling (DCM). We show that recurrent interactions between all visual areas, including higher-level areas like V4 and the lateral occipital complex (LOC), are crucial in crowding and uncrowding. Our results explain the discrepancies in previous findings: in a recurrent visual hierarchy, the crowding effect can theoretically be detected at any stage. Beyond crowding, we demonstrate the need for models like DCM to understand the complex recurrent processing which most likely underlies human perception in general.
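The last point, that in a recurrent hierarchy an effect can in principle be detected at any stage, can be illustrated with a toy linear rate model. This is a hypothetical sketch, not the DCM analysis reported above; the areas, connection weights, and the injected suppression are all invented for illustration.

```python
import numpy as np

# Toy linear rate model of a recurrent hierarchy (V1 -> V2 -> V4 -> LOC),
# with feedback between neighbouring areas. Numbers are illustrative only.
areas = ["V1", "V2", "V4", "LOC"]
n = len(areas)

W = np.zeros((n, n))               # W[i, j]: influence of area j on area i
for i in range(n - 1):
    W[i + 1, i] = 0.6              # feedforward connection
    W[i, i + 1] = 0.3              # feedback connection

drive = np.array([1.0, 0.0, 0.0, 0.0])   # external (retinal) input enters V1 only

def steady_state(modulation_at_loc=0.0):
    # Solve r = W @ r + drive + extra, i.e. (I - W) r = drive + extra.
    extra = np.array([0.0, 0.0, 0.0, modulation_at_loc])
    return np.linalg.solve(np.eye(n) - W, drive + extra)

baseline = steady_state()
modulated = steady_state(modulation_at_loc=-0.2)   # suppression injected at LOC only

for name, b, m in zip(areas, baseline, modulated):
    print(f"{name}: baseline {b:.3f} -> modulated {m:.3f}")
# Because of the feedback connections, suppression injected at the top area
# changes the steady-state response of every area below it, including V1/V2.
```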
8. Bornet A, Doerig A, Herzog MH, Francis G, Van der Burg E. Shrinking Bouma's window: How to model crowding in dense displays. PLoS Comput Biol 2021;17:e1009187. PMID: 34228703; PMCID: PMC8284675; DOI: 10.1371/journal.pcbi.1009187.
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Traditionally, it is thought that visual crowding obeys Bouma's law, i.e., all elements within a certain distance interfere with the target, and that adding more elements always leads to stronger crowding. Crowding is predominantly studied using sparse displays (a target surrounded by a few flankers). However, many studies have shown that this approach leads to wrong conclusions about human vision. Van der Burg and colleagues proposed a paradigm to measure crowding in dense displays using genetic algorithms. Displays were selected and combined over several generations to maximize human performance. In contrast to Bouma's law, only the target's nearest neighbours affected performance. Here, we tested various models to explain these results. We used the same genetic algorithm, but instead of selecting displays based on human performance we selected displays based on the model's outputs. We found that all models based on the traditional feedforward pooling framework of vision were unable to reproduce human behaviour. In contrast, all models involving a dedicated grouping stage explained the results successfully. We show how traditional models can be improved by adding a grouping stage.
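The selection loop described above can be sketched in a few lines. This is a hypothetical, minimal version of the genetic-algorithm procedure: the display representation, the stand-in fitness function, and all parameters are invented for illustration. It only shows how displays would be selected on a model's output instead of on human performance.

```python
import random

def random_display(n_slots=30, density=0.5):
    """A display is a set of occupied flanker positions around a fixed target."""
    return [random.random() < density for _ in range(n_slots)]

def model_performance(display):
    """Stand-in for any crowding model: predicted target discriminability.
    Toy rule: only the slots nearest to the target hurt performance."""
    nearest = display[:4]
    return 1.0 - 0.2 * sum(nearest)

def evolve(generations=50, pop_size=100, keep=20, mutation=0.05):
    population = [random_display() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank displays by the model's output (originally: by human accuracy).
        population.sort(key=model_performance, reverse=True)
        parents = population[:keep]
        children = []
        while len(children) < pop_size - keep:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))
            child = a[:cut] + b[cut:]                      # crossover
            child = [not g if random.random() < mutation else g for g in child]
            children.append(child)
        population = parents + children
    return population

best = max(evolve(), key=model_performance)
print("best display's predicted performance:", model_performance(best))
```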
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, Indiana, United States of America
- Erik Van der Burg
- TNO, Human Factors, Soesterberg, The Netherlands
- Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
9. Dekel R, Sagi D. Interaction of contexts in context-dependent orientation estimation. Vision Res 2020;169:58-72. PMID: 32179340; DOI: 10.1016/j.visres.2020.02.006.
Abstract
The processing of a visual stimulus is known to be influenced by the statistics in recent visual history and by the stimulus' visual surround. Such contextual influences lead to perceptually salient phenomena, such as the tilt aftereffect and the tilt illusion. Despite much research on the influence of an isolated context, it is not clear how multiple, possibly competing sources of contextual influence interact. Here, using psychophysical methods, we compared the combined influence of multiple contexts to the sum of the isolated context influences. The results showed large deviations from linear additivity for adjacent or overlapping contexts, and remarkably, clear additivity when the contexts were sufficiently separated. Specifically, for adjacent or overlapping contexts, the combined effect was often lower than the sum of the isolated component effects (sub-additivity), or was more influenced by one component than another (selection). For contexts that were separated in time (600 ms), the combined effect was exactly the sum of the isolated component effects (in degrees of bias). Overall, the results imply an initial compressive transformation during visual processing, followed by selection between the processed parts.
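The additivity test above can be written out explicitly. A sketch of the three outcomes, with notation chosen here for illustration (the paper's own symbols may differ):

```latex
% Biases (in degrees) induced on the target-orientation estimate by each
% context alone (A, B) and by both contexts presented together (A+B):
\[
  \text{additivity:}\qquad \Delta_{A+B} = \Delta_{A} + \Delta_{B}
\]
\[
  \text{sub-additivity:}\qquad \lvert \Delta_{A+B} \rvert < \lvert \Delta_{A} + \Delta_{B} \rvert
\]
\[
  \text{selection:}\qquad \Delta_{A+B} \approx \Delta_{A} \ \text{or}\ \Delta_{A+B} \approx \Delta_{B}
\]
% Contexts separated in time (600 ms) satisfied the first relation; adjacent
% or overlapping contexts showed the second or third.
```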
Affiliation(s)
- Ron Dekel
- Department of Neurobiology, The Weizmann Institute of Science, Rehovot 7610001, Israel
- Dov Sagi
- Department of Neurobiology, The Weizmann Institute of Science, Rehovot 7610001, Israel.
10. Bornet A, Kaiser J, Kroner A, Falotico E, Ambrosano A, Cantero K, Herzog MH, Francis G. Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision - The Case of Visual Crowding. Front Neurorobot 2019;13:33. PMID: 31191291; PMCID: PMC6549494; DOI: 10.3389/fnbot.2019.00033.
Abstract
Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element strongly deteriorates when neighboring elements are presented in addition (visual crowding). As was shown recently, the magnitude of deterioration depends not only on the directly neighboring elements but on almost all elements and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such a scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for inward-outward anisotropy in visual crowding.
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Alexander Kroner
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Italy
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Gregory Francis
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States
11. Jäkel F, Singh M, Wichmann FA, Herzog MH. An overview of quantitative approaches in Gestalt perception. Vision Res 2016;126:3-8. PMID: 27353224; DOI: 10.1016/j.visres.2016.06.004.
Abstract
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception, and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state of the art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research, and there is a clear trend to try to apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory.
Affiliation(s)
- Frank Jäkel
- Institute of Cognitive Science, University of Osnabrück, Germany.
- Manish Singh
- Department of Psychology and Center for Cognitive Science, Rutgers University, New Brunswick, NJ, United States
- Felix A Wichmann
- Neural Information Processing Group, Faculty of Science, and Bernstein Center for Computational Neuroscience Tübingen, University of Tübingen, Germany; Max Planck Institute for Intelligent Systems, Empirical Inference Department, Tübingen, Germany
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
12.
Abstract
A reference frame is required to specify how motion is perceived. For example, the motion of part of an object is usually perceived relative to the motion of the object itself. Johansson (Psychological Research, 38, 379-393, 1976) proposed that the perceptual system carries out a vector decomposition, which results in common and relative motion percepts. Because vector decomposition is an ill-posed problem, several studies have introduced constraints by means of which the number of solutions can be substantially reduced. Here, we have adopted an alternative approach and studied how, rather than why, a subset of solutions is selected by the visual system. We propose that each retinotopic motion vector creates a reference-frame field in the retinotopic space, and that the fields created by different motion vectors interact in order to determine a motion vector that will serve as the reference frame at a given point and time in space. To test this theory, we performed a set of psychophysical experiments. The field-like influence of motion-based reference frames was manifested by increased nonspatiotopic percepts of the backward motion of a target square with decreasing distance from a drifting grating. We then sought to determine whether these field-like effects of motion-based reference frames can also be extended to stationary landmarks. The results suggest that reference-field interactions occur only between motion-generated fields. Finally, we investigated whether and how different reference fields interact with each other, and found that these interactions are nonlinear and depend on how the motion vectors are grouped. These findings are discussed from the perspective of the reference-frame metric field (RFMF) theory, according to which perceptual grouping operations play a central and essential role in determining the prevailing reference frames.
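Johansson's vector decomposition, referred to above, can be stated compactly. A sketch with generic notation (the symbols are chosen here for illustration, not taken from the paper):

```latex
% Retinotopic motion of a part, decomposed into the motion of the chosen
% reference frame (common motion) and motion relative to that frame.
\[
  \vec{v}_{\text{retinal}} \;=\;
  \underbrace{\vec{v}_{\text{common}}}_{\text{reference frame}}
  \;+\;
  \underbrace{\vec{v}_{\text{relative}}}_{\text{perceived part motion}}
\]
% The decomposition is ill-posed: for any choice of the reference-frame vector
% a compatible relative-motion vector exists. The RFMF theory sketched above
% proposes that interacting reference-frame fields, generated by the retinotopic
% motion vectors themselves, select which decomposition is perceived at each
% point and time in space.
```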