1. Nakayama R, Tanaka M, Kishi Y, Murakami I. Aftereffect of perceived motion trajectories. iScience 2024;27:109626. doi:10.1016/j.isci.2024.109626. PMID: 38623326; PMCID: PMC11016753.
Abstract
If our visual system has a distinct computational process for motion trajectories, such a process may minimize redundancy and emphasize variation in object trajectories by adapting to the current statistics. Our experiments show that after adaptation to multiple objects traveling along trajectories with a common tilt, the trajectory of an object was perceived as tilting on the repulsive side. This trajectory aftereffect occurred irrespective of whether the tilt of the adapting stimulus was physical or an illusion from motion-induced position shifts and did not differ in size across the physical and illusory conditions. Moreover, when the perceived and physical tilts competed during adaptation, the trajectory aftereffect depended on the perceived tilt. The trajectory aftereffect transferred between hemifields and was not explained by motion-insensitive orientation adaptation or attention. These findings provide evidence for a trajectory-specific adaptable process that depends on higher-order representations after the integration of position and motion signals.
Affiliations:
- Ryohei Nakayama, Mai Tanaka, Yukino Kishi, Ikuya Murakami: Department of Psychology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku 113-0033, Tokyo, Japan

2. Luu L, Zhang M, Tsodyks M, Qian N. Cross-fixation interactions of orientations suggest high-to-low-level decoding in visual working memory. Vision Res 2021;190:107963. doi:10.1016/j.visres.2021.107963. PMID: 34784534.
Abstract
Sensory encoding (how stimuli evoke sensory responses) is known to progress from low- to high-level features. Decoding (how responses lead to perception) is less understood but is often assumed to follow the same hierarchy. Under that assumption, orientation decoding must occur in low-level areas such as V1, without cross-fixation interactions. However, Ding, Cueva, Tsodyks, and Qian (2017) provided evidence against this assumption and proposed that visual decoding may often follow a high-to-low-level hierarchy in working memory, in which higher-to-lower-level constraints introduce interactions among lower-level features. If two orientations on opposite sides of the fixation are both task-relevant and enter working memory, they should therefore interact with each other. We indeed found the predicted cross-fixation interactions (repulsion and correlation) between orientations. Control experiments and analyses ruled out alternative explanations such as reporting bias and adaptation across trials on the same side of the fixation. Moreover, we explained the data using a retrospective high-to-low-level Bayesian decoding framework.
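
The flavor of such retrospective high-to-low-level decoding can be caricatured with a toy simulation; this is a minimal sketch under invented assumptions (the function, parameter names, and numbers below are illustrative, not the authors' Bayesian model). Working memory is assumed to store high-level summaries of the two orientations, a shared mean plus an overweighted difference, and the individual orientations are later reconstructed from those noisy summaries, which yields reports that are both correlated and mutually repelled.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reports(theta1, theta2, n_trials=10_000, sensory_noise=1.0,
                     mean_noise=3.0, diff_noise=1.0, diff_gain=1.3):
    """Toy retrospective decoder (illustrative only, not the published model).

    Low-level measurements m1, m2 are summarized by a high-level mean and an
    overweighted difference (diff_gain > 1), mimicking a higher-to-lower-level
    constraint; the individual orientations are then reconstructed from the
    noisy summaries.
    """
    m1 = theta1 + rng.normal(0, sensory_noise, n_trials)
    m2 = theta2 + rng.normal(0, sensory_noise, n_trials)

    mean = (m1 + m2) / 2 + rng.normal(0, mean_noise, n_trials)
    diff = (m2 - m1) * diff_gain + rng.normal(0, diff_noise, n_trials)

    r1 = mean - diff / 2   # reconstructed report of orientation 1
    r2 = mean + diff / 2   # reconstructed report of orientation 2
    return r1, r2

r1, r2 = simulate_reports(theta1=45.0, theta2=50.0)

# Repulsion: the mean reported difference exceeds the physical 5-deg difference.
print("mean reported difference:", np.mean(r2 - r1))
# Interaction: shared noise in the remembered mean couples the two reports' errors.
print("error correlation:", np.corrcoef(r1 - 45.0, r2 - 50.0)[0, 1])
```
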
Affiliations:
- Long Luu: Department of Neuroscience, Zuckerman Institute, Department of Physiology & Cellular Biophysics, Columbia University, New York, NY 10027, USA
- Mingsha Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Misha Tsodyks: Simons Center for Systems Biology, School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA
- Ning Qian: Department of Neuroscience, Zuckerman Institute, Department of Physiology & Cellular Biophysics, Columbia University, New York, NY 10027, USA

3. Nakashima Y, Iijima T, Sugita Y. Surround-contingent motion aftereffect. Vision Res 2015;117:9-15. doi:10.1016/j.visres.2015.09.010. PMID: 26459145.
Abstract
We investigated whether motion aftereffects (MAEs) can be contingent on surroundings. Random dots moving leftward and rightward were presented in alternation, surrounded by either an open circle or an open square. After prolonged exposure to these stimuli, MAEs were found to be contingent upon the surrounding frames: dots moving in a random direction appeared to move leftward when surrounded by the frame that had been presented in conjunction with rightward motion. The effect lasted for 24 h and was observed when the adapter and test stimuli were presented not only retinotopically but also at the same spatiotopic position. Furthermore, the effect was observed even when the adapter and test stimuli were presented at different retinotopic and spatiotopic positions, as long as they were presented in the same hemifield. These results indicate that MAEs can be influenced not only by stimulus features but also by their surroundings, and they suggest that the surround-contingent MAE might be mediated at a higher stage of the motion-processing pathway.
Affiliations:
- Yusuke Nakashima, Takumi Iijima, Yoichi Sugita: Department of Psychology, Waseda University, 1-24-1 Toyama, Shinjuku-ku 162-8644, Tokyo, Japan

4. Lin Z, He S. Emergent filling in induced by motion integration reveals a high-level mechanism in filling in. Psychol Sci 2012;23:1534-41. doi:10.1177/0956797612446348. PMID: 23085642; PMCID: PMC3875405.
Abstract
The visual system is intelligent: it is capable of recovering a coherent surface from an incomplete one, a feat known as perceptual completion or filling in. Traditionally, it has been assumed that surface features are interpolated in a way that resembles the fragmented parts. Using displays featuring four circular apertures, we showed in the study reported here that a distinct completed feature (horizontal motion) arises from local ones (oblique motions); we term this process emergent filling in. Adaptation to emergent filling-in motion generated a dynamic motion aftereffect that was not due to spreading of local motion from the isolated apertures. The filling-in motion aftereffect occurred in both modal and amodal completions, and it was modulated by selective attention. These findings highlight the importance of high-level interpolation processes in filling in and are consistent with the idea that during emergent filling in, the more cognitive-symbolic processes in later areas (e.g., the middle temporal visual area and the lateral occipital complex) provide important feedback signals to guide more isomorphic processes in earlier areas (V1 and V2).
Affiliations:
- Zhicheng Lin: Department of Psychology, University of Minnesota, Twin Cities, USA

5. The fastest (and simplest), the earliest: the locus of processing of rapid forms of motion aftereffect. Neuropsychologia 2011;49:2929-34. doi:10.1016/j.neuropsychologia.2011.06.020.

6. Adaptation to biological motion leads to a motion and a form aftereffect. Atten Percept Psychophys 2011;73:1843-55. doi:10.3758/s13414-011-0133-7.

7. Biber U, Ilg UJ. Visual stability and the motion aftereffect: a psychophysical study revealing spatial updating. PLoS One 2011;6:e16265. doi:10.1371/journal.pone.0016265. PMID: 21298104; PMCID: PMC3027650.
Abstract
Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism that transforms the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of certain neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We tested in humans whether such remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) is an apparent movement in the direction opposite to a previously viewed moving stimulus. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position was observed when subjects prepared saccades. In this remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when the adaptation and test stimuli were located at different retinal locations, i.e., the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization. The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant of a stable percept despite saccades.
Affiliations:
- Ulrich Biber: Hertie-Institute for Clinical Brain Research, Department of Cognitive Neurology, University of Tübingen, Tübingen, Germany

8. Barraclough NE, Keith RH, Xiao D, Oram MW, Perrett DI. Visual adaptation to goal-directed hand actions. J Cogn Neurosci 2009;21:1806-20. doi:10.1162/jocn.2008.21145.
Abstract
Prolonged exposure to visual stimuli, or adaptation, often results in an adaptation "aftereffect," which can profoundly distort our perception of subsequent visual stimuli. This technique has been commonly used to investigate the mechanisms underlying our perception of simple visual stimuli and, more recently, of static faces. We tested whether humans would adapt to movies of hands grasping and placing objects of different weights. After adapting to hands grasping light or heavy objects, subsequently perceived objects appeared relatively heavier or lighter, respectively. The aftereffects increased logarithmically with the number of repetitions of the adapting action and decayed logarithmically with time. Adaptation aftereffects also indicated that perception of actions relies predominantly on view-dependent mechanisms. Adapting to one action significantly influenced the perception of the opposite action. These aftereffects can only be explained by adaptation of mechanisms that take into account the presence or absence of the object in the hand. We then tested whether the evidence on action-processing mechanisms obtained with visual adaptation techniques is confirmed by the underlying neural processing. We recorded monkey superior temporal sulcus (STS) single-cell responses to hand actions. Cells sensitive to grasping or placing typically responded well to the opposite action; cells also responded during different phases of the actions. Cell responses were sensitive to the view of the action and depended on the presence of the object in the scene. We show here that the action-processing mechanisms established using visual adaptation parallel the neural mechanisms revealed by recording from monkey STS. Visual adaptation techniques can thus be usefully employed to investigate the brain mechanisms underlying action perception.
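
The reported logarithmic build-up (and, with a negative slope, decay) is easy to quantify with a simple curve fit; the data values and function below are hypothetical, purely to illustrate the functional form, and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(x, a, b):
    """Aftereffect magnitude as a logarithmic function of x
    (x = number of adapting repetitions; for decay, x would be elapsed time)."""
    return a * np.log(x) + b

# Hypothetical aftereffect magnitudes (arbitrary units) after 1..32 repetitions.
repetitions = np.array([1, 2, 4, 8, 16, 32], dtype=float)
magnitude = np.array([0.1, 0.9, 1.8, 2.6, 3.6, 4.4])

(a, b), _ = curve_fit(log_model, repetitions, magnitude)
print(f"build-up fit: magnitude ~ {a:.2f} * ln(repetitions) + {b:.2f}")
```
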
Affiliations:
- Nick E. Barraclough: University of Hull, Hull, East Yorkshire, UK; University of St Andrews, Scotland, UK

9. Mather G, Pavan A, Campana G, Casco C. The motion aftereffect reloaded. Trends Cogn Sci 2008;12:481-7. doi:10.1016/j.tics.2008.09.002. PMID: 18951829; PMCID: PMC3087115.
Abstract
The motion aftereffect is a robust illusion of visual motion resulting from exposure to a moving pattern. There is a widely accepted explanation of it in terms of changes in the response of cortical direction-selective neurons. Research has distinguished several variants of the effect. Converging recent evidence from different experimental techniques (psychophysics, single-unit recording, brain imaging, transcranial magnetic stimulation, visual evoked potentials and magnetoencephalography) reveals that adaptation is not confined to one or even two cortical areas, but occurs at multiple levels of processing involved in visual motion analysis. A tentative motion-processing framework is described, based on motion aftereffect research. Recent ideas on the function of adaptation see it as a form of gain control that maximises the efficiency of information transmission at multiple levels of the visual pathway.
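
The gain-control idea can be illustrated with a minimal direction-opponent population model; this is a toy sketch under assumptions of our own (von Mises tuning, a divisive gain update, a population-vector readout), not a model proposed in the review. Prolonged stimulation lowers the gain of channels tuned to the adapted direction, so a physically unbiased test drives the population asymmetrically and the decoded direction points the opposite way, reproducing the basic MAE.

```python
import numpy as np

prefs = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)  # preferred directions (rad)
gain = np.ones_like(prefs)                                  # per-channel gain

def tuning(direction, kappa=2.0):
    """Von Mises tuning: how strongly each channel is driven by a given direction."""
    return np.exp(kappa * (np.cos(prefs - direction) - 1.0))

def adapt(direction, strength=0.6):
    """Divisive gain control: channels driven by the adapter lose sensitivity."""
    global gain
    gain = gain / (1.0 + strength * tuning(direction))

def decode(resp):
    """Population-vector readout of the signaled direction (radians)."""
    return np.angle(np.sum(resp * np.exp(1j * prefs)))

test_drive = 0.05 * np.ones_like(prefs)   # physically unbiased (stationary/flicker) test

# Before adaptation the population output is balanced: no net direction signal.
net = abs(np.sum(test_drive * gain * np.exp(1j * prefs)))
print("net signal before adaptation:", round(float(net), 6))

adapt(direction=0.0)   # prolonged exposure to rightward motion (0 rad)
# After adaptation the same test yields a leftward signal (about +/- pi rad): the MAE.
print("decoded direction after adaptation (rad):", decode(test_drive * gain))
```

Divisive gain adjustment of this kind is one concrete way to read "gain control that maximises the efficiency of information transmission": each channel scales down its response to the prevailing, and therefore redundant, input.
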
Affiliations:
- George Mather: Department of Psychology, University of Sussex, Falmer, Brighton, BN1 9QH, UK

10. Adaptation across the cortical hierarchy: low-level curve adaptation affects high-level facial-expression judgments. J Neurosci 2008;28:3374-83. doi:10.1523/jneurosci.0182-08.2008. PMID: 18367604.
Abstract
Adaptation is ubiquitous in sensory processing. Although sensory processing is hierarchical, with neurons at higher levels exhibiting greater degrees of tuning complexity and invariance than those at lower levels, few experimental or theoretical studies address how adaptation at one hierarchical level affects processing at others. Nevertheless, this issue is critical for understanding cortical coding and computation. Therefore, we examined whether perception of high-level facial expressions can be affected by adaptation to low-level curves (i.e., the shape of a mouth). After adapting to a concave curve, subjects more frequently perceived faces as happy, and after adapting to a convex curve, subjects more frequently perceived faces as sad. We observed this multilevel aftereffect with both cartoon and real test faces when the adapting curve and the mouths of the test faces had the same location. However, when we placed the adapting curve 0.2 degrees below the test faces, the effect disappeared. Surprisingly, this positional specificity held even when real faces, instead of curves, were the adapting stimuli, suggesting that it is a general property for facial-expression aftereffects. We also studied the converse question of whether face adaptation affects curvature judgments, and found such effects after adapting to a cartoon face, but not a real face. Our results suggest that there is a local component in facial-expression representation, in addition to holistic representations emphasized in previous studies. By showing that adaptation can propagate up the cortical hierarchy, our findings also challenge existing functional accounts of adaptation.