1. Saleki S, Ziman K, Hartstein KC, Cavanagh P, Tse PU. Endogenous attention biases transformational apparent motion based on high-level shape representations. J Vis 2022;22(12):16. [DOI: 10.1167/jov.22.12.16]
Affiliation(s)
- Sharif Saleki: Department of Psychological and Brain Sciences, Dartmouth College, NH, USA
- Kirsten Ziman: Department of Psychological and Brain Sciences, Dartmouth College, NH, USA
- Kevin C. Hartstein: Department of Psychological and Brain Sciences, Dartmouth College, NH, USA
- Patrick Cavanagh: Centre for Vision Research, York University, Toronto, Ontario, Canada; Department of Psychology, Glendon College, Toronto, Ontario, Canada
- Peter U. Tse: Department of Psychological and Brain Sciences, Dartmouth College, NH, USA
2. The neural mechanisms underlying directional and apparent circular motion assessed with repetitive transcranial magnetic stimulation (rTMS). Neuropsychologia 2020;149:107656. [DOI: 10.1016/j.neuropsychologia.2020.107656]
3. Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019;81:2237-2264. [PMID: 31218601] [PMCID: PMC6848053] [DOI: 10.3758/s13414-019-01789-2]
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Affiliation(s)
- Stephen Grossberg: Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA
4. Gerardin P, Abbatecola C, Devinck F, Kennedy H, Dojat M, Knoblauch K. Neural circuits for long-range color filling-in. Neuroimage 2018;181:30-43. [PMID: 29986833] [DOI: 10.1016/j.neuroimage.2018.06.083]
Abstract
Surface color appearance depends on both local surface chromaticity and global context. How are these inter-dependencies supported by cortical networks? Combining functional imaging and psychophysics, we examined whether color from long-range filling-in engages distinct pathways from responses caused by a field of uniform chromaticity. We find that color from filling-in is best classified and best correlated with appearance by two dorsal areas, V3A and V3B/KO. In contrast, a field of uniform chromaticity is best classified by ventral areas hV4 and LO. Dynamic causal modeling revealed feedback modulation from area V3A to areas V1 and LO for filling-in, contrasting with feedback from LO modulating areas V1 and V3A for a matched uniform chromaticity. These results indicate a dorsal stream role in color filling-in via feedback modulation of area V1, coupled with a cross-stream modulation of ventral areas, suggesting that local and contextual influences on color appearance engage distinct neural networks.
Affiliation(s)
- Peggy Gerardin: Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500, Bron, France
- Clément Abbatecola: Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500, Bron, France
- Henry Kennedy: Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500, Bron, France
- Michel Dojat: Univ. Grenoble Alpes, Inserm, CHU Grenoble Alpes, GIN, 38000, Grenoble, France
- Kenneth Knoblauch: Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500, Bron, France
5. Sanda N, Cerliani L, Authié CN, Sabbah N, Sahel JA, Habas C, Safran AB, Thiebaut de Schotten M. Visual brain plasticity induced by central and peripheral visual field loss. Brain Struct Funct 2018;223:3473-3485. [PMID: 29936553] [PMCID: PMC6132657] [DOI: 10.1007/s00429-018-1700-7]
Abstract
Disorders that specifically affect central and peripheral vision constitute invaluable models to study how the human brain adapts to visual deafferentation. We explored cortical changes after the loss of central or peripheral vision. Cortical thickness (CoTks) and resting-state cortical entropy (rs-CoEn), as a surrogate for neural and synaptic complexity, were extracted in 12 Stargardt macular dystrophy, 12 retinitis pigmentosa (tunnel vision stage), and 14 normally sighted subjects. When compared to controls, both groups with visual loss exhibited decreased CoTks in dorsal area V3d. Peripheral visual field loss also showed a specific CoTks decrease in early visual cortex and ventral area V4, whereas central visual field loss showed one in dorsal area V3A. Only central visual field loss exhibited increased CoEn in areas LO-2 and FG1. Current results revealed biomarkers of brain plasticity within the dorsal and the ventral visual streams following central and peripheral visual field defects.
Affiliation(s)
- Nicolae Sanda:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre d'investigation clinique, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 75012, Paris, France
  - Department of Clinical Neurosciences, Geneva University Hospital and Geneva University School of Medicine, Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Leonardo Cerliani:
  - Frontlab, UPMC Univ Paris 06, Inserm, CNRS, Institut du cerveau et de la moelle (ICM), Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, 75013, Paris, France
  - Brain Connectivity and Behaviour Group, Sorbonne University, Paris, France
  - Department of Psychiatry, Academic Medical Centre, Amsterdam, The Netherlands
  - Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Colas N Authié:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre d'investigation clinique, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 75012, Paris, France
- Norman Sabbah:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre d'investigation clinique, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 75012, Paris, France
- José-Alain Sahel:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre d'investigation clinique, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 75012, Paris, France
  - Institute of Ophthalmology, University College London, London, UK
  - Fondation Ophtalmologique Adolphe de Rothschild, Paris, France
  - Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Christophe Habas:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre de Neuroimagerie, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, 75012, Paris, France
- Avinoam B Safran:
  - Sorbonne Universités, UPMC Université Paris 06, UMR S968, Institut de la Vision, 75012, Paris, France
  - INSERM, U968, Institut de la Vision, 75012, Paris, France
  - CNRS, UMR 7210, Institut de la Vision, 75012, Paris, France
  - Centre d'investigation clinique, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, INSERM-DHOS CIC 1423, 75012, Paris, France
  - Department of Clinical Neurosciences, Geneva University Hospital and Geneva University School of Medicine, Gabrielle-Perret-Gentil 4, 1205, Geneva, Switzerland
- Michel Thiebaut de Schotten:
  - Frontlab, UPMC Univ Paris 06, Inserm, CNRS, Institut du cerveau et de la moelle (ICM), Hôpital Pitié-Salpêtrière, Boulevard de l'hôpital, 75013, Paris, France
  - Brain Connectivity and Behaviour Group, Sorbonne University, Paris, France
  - Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
6. Erlikhman G, Caplovitz GP, Gurariy G, Medina J, Snow JC. Towards a unified perspective of object shape and motion processing in human dorsal cortex. Conscious Cogn 2018;64:106-120. [PMID: 29779844] [DOI: 10.1016/j.concog.2018.04.016]
Abstract
Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.
Affiliation(s)
- Gennadiy Gurariy: Department of Psychology, University of Nevada, Reno, USA; Department of Psychology, University of Wisconsin, Milwaukee, USA
- Jared Medina: Department of Psychological and Brain Sciences, University of Delaware, USA
7. Schindler A, Bartels A. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow. Cereb Cortex 2018;27:2885-2893. [PMID: 27222382] [DOI: 10.1093/cercor/bhw136]
Abstract
Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding, where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD-suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide the first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding.
Affiliation(s)
- Andreas Schindler: Vision and Cognition Lab, Centre for Integrative Neuroscience, and Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Andreas Bartels: Vision and Cognition Lab, Centre for Integrative Neuroscience, and Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
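The suppression effect reported in the entry above follows from one line of predictive-coding arithmetic: feedback subtracts a higher-level prediction from the feed-forward signal, so predictable (coherent) optic flow leaves a smaller residual in early visual cortex than unpredictable (random) motion. A minimal toy sketch of that arithmetic; the signals, the mean-based "prediction," and the residual read-out are illustrative assumptions, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def early_visual_response(flow, prediction):
    # Residual activity after feedback subtracts the predicted signal:
    # a stand-in for the BOLD-like prediction-error signal in EVC.
    return np.mean(np.abs(flow - prediction))

n = 500
coherent = np.tile([1.0, 0.0], (n, 1))      # coherent flow: every local vector points right
random_flow = rng.standard_normal((n, 2))   # random motion: unpredictable local vectors

# A higher-level motion area (e.g., MST or V6) feeds back its best global estimate.
pred_coherent = coherent.mean(axis=0)
pred_random = random_flow.mean(axis=0)

print(early_visual_response(coherent, pred_coherent))   # small residual (here exactly 0.0)
print(early_visual_response(random_flow, pred_random))  # large residual
```

Coherent flow is fully captured by its global summary, so the residual vanishes; random flow is not, so the residual stays large, mirroring the relative EVC suppression for coherent motion.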
8. Hu B, Yue S, Zhang Z. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing. IEEE Trans Neural Netw Learn Syst 2017;28:2803-2821. [PMID: 27831890] [DOI: 10.1109/TNNLS.2016.2592969]
Abstract
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. Computational models exist for translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric, laterally inhibited direction-selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of directional columns in the cerebral cortex, these direction-selective neurons are arranged in cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction-selective neuron is multiplied by the excitation gathered from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-clockwise (ccw), is to be perceived. Systematic experiments under various conditions and settings have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step toward dynamic visual information processing.
Affiliation(s)
- Bin Hu: College of Computer Science and Technology, Guizhou University, Guiyang, China
- Shigang Yue: School of Computer Science, University of Lincoln, Lincoln, UK
- Zhuhong Zhang: College of Big Data and Information Engineering, Guizhou University, Guiyang, China
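The core decision rule this abstract describes, direction-selective units arranged in cyclic order whose pooled excitation signals cw versus ccw rotation, can be illustrated with a toy version. This is a geometric sketch under simplifying assumptions, not the paper's DSNN with lateral inhibition and delayed multiplicative interactions:

```python
import numpy as np

def ring_motion_vectors(angles, omega, radius=1.0):
    # Displacement vectors of points on a ring rotating at angular velocity
    # omega (omega > 0 is counter-clockwise in standard coordinates).
    return omega * radius * np.stack([-np.sin(angles), np.cos(angles)], axis=1)

def perceive_rotation(angles, motion):
    # One direction-selective unit per ring position, with its preferred
    # direction set to the local ccw tangent; summed (pooled) excitation
    # across the cyclic arrangement decides the rotation percept.
    tangents = np.stack([-np.sin(angles), np.cos(angles)], axis=1)
    pooled = np.sum(motion * tangents)
    return "ccw" if pooled > 0 else "cw"

angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
print(perceive_rotation(angles, ring_motion_vectors(angles, omega=0.1)))   # ccw
print(perceive_rotation(angles, ring_motion_vectors(angles, omega=-0.1)))  # cw
```

Each unit responds in proportion to how well the local motion matches its tangential preference, so the sign of the pooled response flips with the rotation direction.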
9. Frank SM, Greenlee MW, Tse PU. Long Time No See: Enduring Behavioral and Neuronal Changes in Perceptual Learning of Motion Trajectories 3 Years After Training. Cereb Cortex 2017;28:1260-1271. [DOI: 10.1093/cercor/bhx039]
Affiliation(s)
- Sebastian M Frank: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mark W Greenlee: Institute for Experimental Psychology, University of Regensburg, Regensburg, Germany
- Peter U Tse: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
10. Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016;87:38-95. [PMID: 28088645] [DOI: 10.1016/j.neunet.2016.11.003]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg: Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA 02215, USA
11. Parietal cortex mediates perceptual Gestalt grouping independent of stimulus size. Neuroimage 2016;133:367-377. [DOI: 10.1016/j.neuroimage.2016.03.008]
12. Erlikhman G, Gurariy G, Mruczek REB, Caplovitz GP. The neural representation of objects formed through the spatiotemporal integration of visual transients. Neuroimage 2016;142:67-78. [PMID: 27033688] [DOI: 10.1016/j.neuroimage.2016.03.044]
Abstract
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time.
Affiliation(s)
- Ryan E B Mruczek: Department of Psychology, University of Nevada, Reno, USA; Department of Psychology, Worcester State University, USA
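The decoding approach in the entry above, MVPA across topographic regions, boils down to training a classifier on multi-voxel response patterns and testing whether it identifies object shape from held-out patterns. A self-contained sketch using a correlation-based nearest-centroid decoder on synthetic "voxel" data; the data, noise level, and decoder choice are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_classifier(train_patterns, train_labels, test_pattern):
    # Nearest-centroid MVPA decoder: assign the label whose mean training
    # pattern correlates most strongly with the test pattern.
    labels = np.array(train_labels)
    best_label, best_r = None, -np.inf
    for lab in sorted(set(train_labels)):
        centroid = train_patterns[labels == lab].mean(axis=0)
        r = np.corrcoef(centroid, test_pattern)[0, 1]
        if r > best_r:
            best_label, best_r = lab, r
    return best_label

# Synthetic region: two shapes evoke distinct (noisy) 50-voxel response patterns.
n_vox = 50
proto = {"square": rng.standard_normal(n_vox), "circle": rng.standard_normal(n_vox)}
X = np.vstack([proto[s] + 0.3 * rng.standard_normal(n_vox)
               for s in ["square", "circle"] * 10])
y = ["square", "circle"] * 10

probe = proto["square"] + 0.3 * rng.standard_normal(n_vox)
print(correlation_classifier(X, y, probe))
```

Above-chance decoding in a region is taken as evidence that its voxel pattern carries shape information; the same logic applies per region when run over many visual areas.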
13. Spatiotemporal Form Integration: sequentially presented inducers can lead to representations of stationary and rigidly rotating objects. Atten Percept Psychophys 2015;77:2740-2754. [PMID: 26269386] [DOI: 10.3758/s13414-015-0967-5]
Abstract
Objects in the world often are occluded and in motion. The visible fragments of such objects are revealed at different times and locations in space. To form coherent representations of the surfaces of these objects, the visual system must integrate local form information over space and time. We introduce a new illusion in which a rigidly rotating square is perceived on the basis of sequentially presented Pacman inducers. The illusion highlights two fundamental processes that allow us to perceive objects whose form features are revealed over time: Spatiotemporal Form Integration (STFI) and Position Updating. STFI refers to the spatial integration of persistent representations of local form features across time. Position updating of these persistent form representations allows them to be integrated into a rigid global motion percept. We describe three psychophysical experiments designed to identify spatial and temporal constraints that underlie these two processes and a fourth experiment that extends these findings to more ecologically valid stimuli. Our results indicate that although STFI can occur across relatively long delays between successive inducers (i.e., greater than 500 ms), position updating is limited to a more restricted temporal window (i.e., ~300 ms or less), and to a confined range of spatial (mis)alignment. These findings lend insight into the limits of mechanisms underlying the visual system's capacity to integrate transient, piecemeal form information, and support coherent object representations in the ever-changing environment.
14. Strother L, Killebrew KW, Caplovitz GP. The lemon illusion: seeing curvature where there is none. Front Hum Neurosci 2015;9:95. [PMID: 25755640] [PMCID: PMC4337333] [DOI: 10.3389/fnhum.2015.00095]
Abstract
Curvature is a highly informative visual cue for shape perception and object recognition. We introduce a novel illusion, the Lemon Illusion, in which subtle illusory curvature is perceived along contour regions that are devoid of physical curvature. We offer several perceptual demonstrations and observations that lead us to conclude that the Lemon Illusion is an instance of a more general illusory curvature phenomenon, one in which the presence of contour curvature discontinuities leads to the erroneous extension of perceived curvature. We propose that this erroneous extension of perceived curvature results from the interaction of neural mechanisms that operate on spatially local contour curvature signals with higher-tier mechanisms that serve to establish more global representations of object shape. Our observations suggest that the Lemon Illusion stems from discontinuous curvature transitions between rectilinear and curved contour segments. However, the presence of curvature discontinuities is not sufficient to produce the Lemon Illusion, and the minimal conditions necessary to elicit this subtle and insidious illusion are difficult to pin down.
Affiliation(s)
- Lars Strother: Department of Psychology, University of Nevada, Reno, NV, USA
15. Beyond Simple and Complex Neurons: Towards Intermediate-level Representations of Shapes and Objects. Künstliche Intelligenz 2015. [DOI: 10.1007/s13218-014-0341-0]
16
|
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. [PMID: 25642198 PMCID: PMC4294135 DOI: 10.3389/fpsyg.2014.01457] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Accepted: 11/28/2014] [Indexed: 12/02/2022] Open
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
17
Tschechne S, Neumann H. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation. Front Comput Neurosci 2014; 8:93. [PMID: 25157228 PMCID: PMC4127482 DOI: 10.3389/fncom.2014.00093] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Accepted: 07/22/2014] [Indexed: 11/13/2022] Open
Abstract
Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages of the processing hierarchy.
Affiliation(s)
- Stephan Tschechne
- Faculty of Engineering and Computer Science (with Psychology and Education), Institute of Neural Information Processing, Ulm University, Ulm, Germany
18
Fesi JD, Thomas AL, Gilmore RO. Cortical responses to optic flow and motion contrast across patterns and speeds. Vision Res 2014; 100:56-71. [PMID: 24751405 DOI: 10.1016/j.visres.2014.04.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2013] [Revised: 03/05/2014] [Accepted: 04/09/2014] [Indexed: 11/26/2022]
Abstract
Motion provides animals with fast and robust cues for navigation and object detection. In the first case, stereotyped patterns of optic flow inform a moving observer about the direction and speed of its own movement. In the case of object detection, regional differences in motion allow for the segmentation of figures from their background, even in the absence of color or shading cues. Previous research has investigated human electrophysiological responses to global motion across speeds, but only focused upon one type of optic flow pattern. Here, we compared steady-state visual evoked potential (SSVEP) responses across patterns and speeds, both for optic flow and for motion-defined figure patterns, to assess the extent to which the processes are pattern-general or pattern-specific. For optic flow, pattern and speed effects on response amplitudes varied substantially across channels, suggesting pattern-specific processing at slow speeds and pattern-general activity at fast speeds. Responses for coherence- and direction-defined figures were comparatively more uniform, with similar response profiles and spatial distributions. Self- and object-motion patterns activate some of the same circuits, but these data suggest differential sensitivity: not only across the two classes of motion, but also across the patterns within each class, and across speeds. Thus, the results demonstrate that cortical processing of global motion is complex and activates a distributed network.
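As an illustration of the SSVEP measure this abstract relies on, here is a minimal sketch of how response amplitude at a stimulation frequency might be extracted from a single channel via the DFT. The sampling rate, duration, and 3 Hz flicker frequency below are invented for the example, not taken from the study.

```python
import numpy as np

def ssvep_amplitude(signal, fs, stim_freq):
    """Estimate the steady-state response amplitude at stim_freq (Hz)
    from one channel's time series, sampled at fs Hz."""
    n = len(signal)
    window = np.hanning(n)
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - stim_freq))
    # For a sinusoid of amplitude A landing on an exact bin,
    # |X[k]| ~= A * sum(window) / 2, so invert that scaling.
    return 2.0 * np.abs(spectrum[k]) / window.sum()

# Synthetic check: a 3 Hz "flicker response" of amplitude 2 buried in noise
rng = np.random.default_rng(0)
fs, dur, f = 500, 10.0, 3.0
t = np.arange(0, dur, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * f * t) + rng.normal(size=t.size)
amp = ssvep_amplitude(eeg, fs, f)
print(round(amp, 1))  # close to 2.0, the injected amplitude
```

Comparing such amplitudes across patterns and speeds, channel by channel, is the kind of analysis the abstract summarizes.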
Affiliation(s)
- Jeremy D Fesi
- Department of Ophthalmology, McGill University, 687 Pine Avenue West, Montreal, QC H3A 1A1, Canada.
- Amanda L Thomas
- Department of Psychology, The Pennsylvania State University, 114 Moore Building, University Park, PA 16802, United States
- Rick O Gilmore
- Department of Psychology, The Pennsylvania State University, 114 Moore Building, University Park, PA 16802, United States; Social, Life, & Engineering Sciences Imaging Center, The Pennsylvania State University, University Park, PA 16802, United States
19
The global slowdown effect: why does perceptual grouping reduce perceived speed? Atten Percept Psychophys 2014; 76:780-92. [PMID: 24448695 DOI: 10.3758/s13414-013-0607-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The percept of four rotating dot pairs is bistable. The "local percept" is of four pairs of dots rotating independently. The "global percept" is of two large squares translating over one another (Anstis & Kim 2011). We have previously demonstrated (Kohler, Caplovitz, & Tse 2009) that the global percept appears to move more slowly than the local percept. Here, we investigate and rule out several hypotheses for why this may be the case. First, we demonstrate that the global slowdown effect does not occur because the global percept is of larger objects than the local percept. Second, we show that the global slowdown effect is not related to rotation-specific detectors that may be more active in the local than in the global percept. Third, we find that the effect is also not due to a reduction of image elements during grouping and can occur with a stimulus very different from the one used previously. This suggests that the effect may reflect a general property of perceptual grouping. Having ruled out these possibilities, we suggest that the global slowdown effect may arise from emergent motion signals that are generated by the moving dots, which are interpreted as the ends of "barbell bars" in the local percept or the corners of the illusory squares in the global percept. Alternatively, the effect could be the result of noisy sources of motion information that arise from perceptual grouping that, in turn, increase the influence of Bayesian priors toward slow motion (Weiss, Simoncelli, & Adelson 2002).
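The slow-motion-prior account cited above (Weiss, Simoncelli, & Adelson 2002) can be sketched in a few lines: with a Gaussian likelihood around the measured speed and a zero-mean Gaussian prior on speed, the posterior estimate shrinks toward zero as measurement noise grows, so noisier motion signals yield a slower percept. All parameter values below are illustrative, not fitted to any data.

```python
def perceived_speed(measured, sigma_noise, sigma_prior):
    """MAP speed estimate for a Gaussian likelihood centered on the
    measured speed times a zero-mean Gaussian "slow motion" prior."""
    shrinkage = sigma_prior**2 / (sigma_prior**2 + sigma_noise**2)
    return shrinkage * measured

v = 10.0  # deg/s, the physically presented speed (hypothetical)
low_noise = perceived_speed(v, sigma_noise=1.0, sigma_prior=5.0)
high_noise = perceived_speed(v, sigma_noise=4.0, sigma_prior=5.0)
print(low_noise > high_noise)  # noisier measurement -> slower percept
```

On this account, if perceptual grouping adds noise to the motion estimate, the prior pulls the grouped (global) percept toward slower speeds, which is the hypothesis the abstract ends with.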
20
Jóhannesson OI, Sigurdardottir KÓ, Kristjánsson A. Searching for bumps and ellipses on the ground and in the sky: no advantage for the ground plane. Vision Res 2013; 92:26-32. [PMID: 24025995 DOI: 10.1016/j.visres.2013.09.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2013] [Revised: 08/23/2013] [Accepted: 09/02/2013] [Indexed: 11/30/2022]
Abstract
A staple of modern theories of vision is that the visual system has evolved to perceive cues containing the most predictive information about the layout of the environment. This entails the prediction that, other things being equal, visual performance in a familiar setting should be superior to performance in an unfamiliar one. Visual performance should therefore be better on the familiar ground plane than on an implied sky or wall plane. We tested this by comparing visual search for stimuli presented in an implied ground plane with search on a 180° rotated display, in which the stimuli appeared in an implied "sky" plane, and with search in a random layout implying no depth. This was tested for stimuli with or without curvature discontinuities, which have previously been shown to be strong cues for shape analysis. Surprisingly, no advantage of the ground plane over the sky plane was observed, while a strong effect of layout regularity was seen. Similarly, in Experiment 2 there was little effect of placing the stimuli on an implied wall plane compared with the ground or the sky. The results cannot be explained by assuming that curvature discontinuities are such strong cues that they overshadow any effect of depth plane, since there was a strong effect of regular versus random layout, which should also have disappeared under this account. The results argue instead for a very strong effect of layout regularity, unrelated to environmental regularities in evolutionary history, since there was no ground-plane benefit.
Affiliation(s)
- Omar I Jóhannesson
- Laboratory for Visual Perception and Visuomotor Control, Faculty of Psychology, School of Health Sciences, University of Iceland, Oddi, 101 Reykjavík, Iceland.
21
Abstract
The dissociation of a figure from its background is an essential feat of visual perception, as it allows us to detect, recognize, and interact with shapes and objects in our environment. In order to understand how the human brain gives rise to the perception of figures, we here review experiments that explore the links between activity in visual cortex and performance of perceptual tasks related to figure perception. We organize our review according to a proposed model that attempts to contextualize figure processing within the more general framework of object processing in the brain. Overall, the current literature provides us with individual linking hypotheses as to cortical regions that are necessary for particular tasks related to figure perception. Attempts to reach a more complete understanding of how the brain instantiates figure and object perception, however, will have to consider the temporal interaction between the many regions involved, the details of which may vary widely across different tasks.
22
Blair CD, Goold J, Killebrew K, Caplovitz GP. Form features provide a cue to the angular velocity of rotating objects. J Exp Psychol Hum Percept Perform 2013; 40:116-28. [PMID: 23750970 DOI: 10.1037/a0033055] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
As an object rotates, each location on the object moves with an instantaneous linear velocity, dependent upon its distance from the center of rotation, whereas the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different-sized objects, as changing the size of an object changes the linear velocity of each location on the object's surface, while maintaining the object's angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high-contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object.
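The distinction the abstract turns on, between a fixed angular velocity and position-dependent linear velocities, follows directly from v = ωr: doubling an object's size doubles the linear speed of every contour point while leaving the angular velocity unchanged. A minimal sketch with hypothetical values:

```python
import math

def linear_speed(omega_deg_per_s, radius):
    """Instantaneous linear speed of a contour point at distance
    `radius` from the rotation center, via v = omega * r
    (omega converted from deg/s to rad/s)."""
    return math.radians(omega_deg_per_s) * radius

omega = 90.0              # deg/s: fixed for the object as a whole
small, large = 1.0, 2.0   # the same shape at two sizes (arbitrary units)
ratio = linear_speed(omega, large) / linear_speed(omega, small)
print(ratio)  # 2.0: linear speeds scale with size, angular velocity does not
```

This is exactly the manipulation described: changing size changes every local linear velocity while holding the object's angular velocity constant.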
23
Abstract
Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues.
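A simplified stand-in for the optimal discriminator described above: if a one-step rotation of the global shape corresponds to a circular shift of the disks' depth profile, direction can be decoded by asking which shift of the previous frame correlates better with the perturbed current frame. This maximum-correlation rule is a sketch, not the authors' actual Bayesian model, and the stimulus parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_disks, perturbation = 16, 0.1  # hypothetical stimulus parameters

depths = rng.normal(size=n_disks)  # random depth assigned to each disk
# A one-step clockwise rotation of the global shape is a circular shift
# of the depth profile; per-frame perturbation jitters each disk's depth.
next_frame = np.roll(depths, 1) + perturbation * rng.normal(size=n_disks)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Compare the two rotation hypotheses by 3-D shape correlation.
cw_score = corr(np.roll(depths, 1), next_frame)
ccw_score = corr(np.roll(depths, -1), next_frame)
decision = "cw" if cw_score > ccw_score else "ccw"
print(decision)  # prints "cw": the correct direction at low perturbation
```

Raising `perturbation` degrades the correlation difference, mirroring the performance drop the abstract reports at high perturbation.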
Affiliation(s)
- Anshul Jain
- Graduate Center for Vision Research, SUNY College of Optometry, New York, NY, USA.
24
Kujovic M, Zilles K, Malikovic A, Schleicher A, Mohlberg H, Rottschy C, Eickhoff SB, Amunts K. Cytoarchitectonic mapping of the human dorsal extrastriate cortex. Brain Struct Funct 2013; 218:157-72. [PMID: 22354469 PMCID: PMC3535362 DOI: 10.1007/s00429-012-0390-9] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2011] [Accepted: 01/31/2012] [Indexed: 11/06/2022]
Abstract
The dorsal visual stream consists of several functionally specialized areas, but most of their cytoarchitectonic correlates have not yet been identified in the human brain. The cortex adjacent to Brodmann area 18/V2 was therefore analyzed in serial sections of ten human post-mortem brains using morphometrical and multivariate statistical analyses for the definition of areal borders. Two previously unknown cytoarchitectonic areas (hOc3d, hOc4d) were detected. They occupy the medial and, to a smaller extent, lateral surface of the occipital lobe. The larger area, hOc3d, is located dorso-lateral to area V2 in the region of superior and transverse occipital, as well as parieto-occipital sulci. Area hOc4d was identified rostral to hOc3d; it differed from the latter by larger pyramidal cells in lower layer III, thinner layers V and VI, and a sharp cortex-white-matter borderline. The delineated areas were superimposed in the anatomical MNI space, and probabilistic maps were calculated. They show a relatively high intersubject variability in volume and position. Based on their location and neighborhood relationship, areas hOc3d and hOc4d are putative anatomical substrates of functionally defined areas V3d and V3a, a hypothesis that can now be tested by comparing probabilistic cytoarchitectonic maps and activation studies of the living human brain.
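The probabilistic maps mentioned here can be illustrated schematically: once each subject's delineated area is a binary mask registered to a common space, the map is simply the voxelwise fraction of subjects in whom the area is present. The masks below are random stand-ins, not real hOc3d/hOc4d delineations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 10            # matching the ten post-mortem brains
vol_shape = (4, 4, 4)      # toy volume; real maps live in MNI space

# One binary mask per brain: True where the area was delineated.
masks = rng.random((n_subjects,) + vol_shape) > 0.5

# Probabilistic map: per-voxel overlap fraction across subjects, in [0, 1].
prob_map = masks.mean(axis=0)
print(prob_map.shape)  # (4, 4, 4)
```

High-variability areas show low peak overlap in such maps, which is what the abstract means by "relatively high intersubject variability in volume and position."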
Affiliation(s)
- Milenko Kujovic
- C. & O. Vogt Institute for Brain Research, University of Düsseldorf, Düsseldorf, Germany
- Karl Zilles
- C. & O. Vogt Institute for Brain Research, University of Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM 1, INM 2) and JARA, Translational Brain Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Aleksandar Malikovic
- Institute of Neuroscience and Medicine (INM 1, INM 2) and JARA, Translational Brain Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Institute of Anatomy, Faculty of Medicine, University of Belgrade, Belgrade, Serbia
- Axel Schleicher
- C. & O. Vogt Institute for Brain Research, University of Düsseldorf, Düsseldorf, Germany
- Hartmut Mohlberg
- Institute of Neuroscience and Medicine (INM 1, INM 2) and JARA, Translational Brain Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Claudia Rottschy
- C. & O. Vogt Institute for Brain Research, University of Düsseldorf, Düsseldorf, Germany
- Simon B. Eickhoff
- C. & O. Vogt Institute for Brain Research, University of Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM 1, INM 2) and JARA, Translational Brain Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- Katrin Amunts
- Institute of Neuroscience and Medicine (INM 1, INM 2) and JARA, Translational Brain Medicine, Research Centre Jülich, 52425 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
25
Matsuyoshi D, Ikeda T, Sawamoto N, Kakigi R, Fukuyama H, Osaka N. Differential roles for parietal and occipital cortices in visual working memory. PLoS One 2012; 7:e38623. [PMID: 22679514 PMCID: PMC3367960 DOI: 10.1371/journal.pone.0038623] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Accepted: 05/13/2012] [Indexed: 11/19/2022] Open
Abstract
Visual working memory (VWM) is known as a highly capacity-limited cognitive system that can hold 3–4 items. Recent studies have demonstrated that activity in the intraparietal sulcus (IPS) and occipital cortices correlates with the number of representations held in VWM. However, differences among those regions are poorly understood, particularly when task-irrelevant items are to be ignored. The present fMRI study investigated whether memory load-sensitive regions such as the IPS and occipital cortices respond differently to task-relevant information. Using a change detection task in which participants were required to remember pre-specified targets, here we show that while the IPS exhibited comparable responses to both targets and distractors, the dorsal occipital cortex manifested significantly weaker responses to an array containing distractors than to an array containing only targets, even though the number of objects presented was the same in the two arrays. These results suggest that parietal and occipital cortices engage differently in distractor processing and that dorsal occipital, rather than parietal, activity appears to reflect the output of stimulus filtering and selection based on behavioral relevance.
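For context on how the 3–4 item capacity limit is typically estimated in change detection tasks, the conventional measure is Cowan's K, computed from hit and false-alarm rates. Whether this exact measure was used in the study is an assumption here, and the performance numbers below are hypothetical.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K, the conventional capacity estimate for single-probe
    change detection: K = N * (H - FA). Illustrative only; not
    necessarily the analysis used in this particular study."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical performance at set size 6:
k = cowan_k(set_size=6, hit_rate=0.80, false_alarm_rate=0.25)
print(round(k, 2))  # 3.3, in line with the 3-4 item limit
```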
Affiliation(s)
- Daisuke Matsuyoshi
- Department of Psychology, Graduate School of Letters, Kyoto University, Yoshida-honmachi, Sakyo, Kyoto, Japan.
26
Foley NC, Grossberg S, Mingolla E. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding. Cogn Psychol 2012; 65:77-117. [PMID: 22425615 DOI: 10.1016/j.cogpsych.2012.02.001] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2011] [Revised: 01/07/2012] [Accepted: 02/02/2012] [Indexed: 11/18/2022]
Abstract
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.
Affiliation(s)
- Nicholas C Foley
- Center for Adaptive Systems, Department of Cognitive and Neural Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
27
Porter KB, Caplovitz GP, Kohler PJ, Ackerman CM, Tse PU. Rotational and translational motion interact independently with form. Vision Res 2011; 51:2478-87. [PMID: 22024049 DOI: 10.1016/j.visres.2011.10.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2011] [Revised: 10/07/2011] [Accepted: 10/09/2011] [Indexed: 10/16/2022]
Abstract
Do the mechanisms that underlie the perception of translational and rotational object motion show evidence of independent processing? By probing the perceived speed of translating and/or rotating objects, we find that an object's form contributes in independent ways to the processing of translational and rotational motion: In the context of translational motion, it has been shown that the more elongated an object is along its direction of motion, the faster it is perceived to translate; in the context of rotational motion, it has been shown that the sharper the maxima of curvature along an object's contour, the faster it appears to rotate. Here we demonstrate that such rotational form-motion interactions are due solely to the rotational component of combined rotational and translational motion. We conclude that the perception of rotational motion relies on form-motion interactions that are independent of the processing underlying translational motion.
Affiliation(s)
- Katharine B Porter
- Department of Psychological and Brain Sciences, Dartmouth College, United States.
28
Wildenberg JC, Tyler ME, Danilov YP, Kaczmarek KA, Meyerand ME. Electrical tongue stimulation normalizes activity within the motion-sensitive brain network in balance-impaired subjects as revealed by group independent component analysis. Brain Connect 2011; 1:255-65. [PMID: 22433053 DOI: 10.1089/brain.2011.0029] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Multivariate analysis of functional magnetic resonance imaging (fMRI) data allows investigations into network behavior beyond simple activations of individual regions. We apply group independent component analysis to fMRI data collected in a previous study looking at the sustained neuromodulatory effects of electrical tongue stimulation in balance-impaired individuals. Twelve subjects with balance disorders viewed optic flow in an fMRI scanner before and after 5 days of electrical tongue stimulation. Nine healthy controls also viewed the visual stimuli but did not receive any stimulation. Multiple regression of the 47 estimated components found two that were modulated by the visual stimuli. Component 7, comprised primarily of the primary visual cortex (V1), responded to all visual stimuli and showed no difference in task-related activity between the healthy controls and the balance-impaired subjects before or after stimulation. Component 11 responded only to motion in the visual field and contained multiple cortical and subcortical regions involved in processing information pertinent to balance. Two-sample t-tests of the calculated signal change revealed that the task-related activity of this network is greater in balance-impaired subjects compared with controls before stimulation (p=0.02), but that this network hypersensitivity decreases after electrical tongue stimulation (p=0.001).
Affiliation(s)
- Joseph C Wildenberg
- Neuroscience Training Program, University of Wisconsin, Madison, Wisconsin 53705, USA.
29
Adjacent visual representations of self-motion in different reference frames. Proc Natl Acad Sci U S A 2011; 108:11668-73. [PMID: 21709244 DOI: 10.1073/pnas.1102984108] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Recent investigations indicate that retinal motion is not directly available for perception when moving around [Souman JL, et al. (2010) J Vis 10:14], possibly pointing to suppression of retinal speed sensitivity in motion areas. Here, we investigated the distribution of retinocentric and head-centric representations of self-rotation in human lower-tier visual motion areas. Functional MRI responses were measured to a set of visual self-motion stimuli with different levels of simulated gaze and simulated head rotation. A parametric generalized linear model analysis of the blood oxygen level-dependent responses revealed subregions of accessory V3 area, V6(+) area, middle temporal area, and medial superior temporal area that were specifically modulated by the speed of the rotational flow relative to the eye and head. Pursuit signals, which link the two reference frames, were also identified in these areas. To our knowledge, these results are the first demonstration of multiple visual representations of self-motion in these areas. The existence of such adjacent representations points to early transformations of the reference frame for visual self-motion signals and a topography by visual reference frame in lower-order motion-sensitive areas. This suggests that visual decisions for action and perception may take into account retinal and head-centric motion signals according to task requirements.
30
Cohen EH, Jain A, Zaidi Q. The utility of shape attributes in deciphering movements of non-rigid objects. J Vis 2010; 10:29. [PMID: 20884524 PMCID: PMC3334828 DOI: 10.1167/10.11.29] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Most moving objects in the world are non-rigid, changing shape as they move. To disentangle shape changes from movements, computational models either fit shapes to combinations of basis shapes or motion trajectories to combinations of oscillations but are biologically unfeasible in their input requirements. Recent neural models parse shapes into stored examples, which are unlikely to exist for general shapes. We propose that extracting shape attributes, e.g., symmetry, facilitates veridical perception of non-rigid motion. In a new method, identical dots were moved in and out along invisible spokes, to simulate the rotation of dynamically and randomly distorting shapes. Discrimination of rotation direction measured as a function of non-rigidity was 90% as efficient as the optimal Bayesian rotation decoder and ruled out models based on combining the strongest local motions. Remarkably, for non-rigid symmetric shapes, observers outperformed the Bayesian model when perceived rotation could correspond only to rotation of global symmetry, i.e., when tracking of shape contours or local features was uninformative. That extracted symmetry can drive perceived motion suggests that shape attributes may provide links across the dorsal-ventral separation between motion and shape processing. Consequently, the perception of non-rigid object motion could be based on representations that highlight global shape attributes.
Affiliation(s)
- Elias H Cohen
- Graduate Center for Vision Research, State University of New York, College of Optometry, New York, NY 10036, USA.
31
Extrastriate cortical activity reflects segmentation of motion into independent sources. Neuropsychologia 2010; 48:2699-708. [PMID: 20478319 DOI: 10.1016/j.neuropsychologia.2010.05.017] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2010] [Revised: 04/02/2010] [Accepted: 05/09/2010] [Indexed: 11/21/2022]
Abstract
Identical local image motion signals can arise from countless object motions in the world. In order to resolve this ambiguity, the visual system must somehow integrate motion signals arising from different locations along an object's contour. Difficulties arise, however, because image contours can derive from multiple objects and from occlusion. Thus, correctly integrating respective objects' motion signals presupposes the specification of what counts as an object. Depending on how this form analysis problem is solved, dramatically different object motion percepts can be constructed from the same set of local image motions. Here we apply fMRI to investigate the mechanisms underlying the segmentation and integration of motion signals that are critical to motion perception in general. We hold the number of image objects constant, but vary whether these objects are perceived to move independently or not. We find that BOLD signal in V3v, V4v, V3A, V3B and MT varies with the number of distinct sources of motion information in the visual scene. These data support the hypothesis that these areas integrate form and motion information in order to segment motion into independent sources (i.e. objects) thereby overcoming ambiguities that arise at the earliest stages of motion processing.
|
32
|
Abstract
The association of borders with "figure" rather than "background" provides a topological organizing principle for early vision. Such global influences have recently been shown to have local effects, with neuronal activity modulated by stimulus properties from well outside the classical receptive field. We extend the theoretical analysis of such phenomena by developing the geometry of interaction between shading, boundaries, and boundary ownership for smooth surfaces. The purely exterior edges of smooth objects enjoy a fold-type relationship between shading and boundary, due to foreshortening, while the background is cut off transversely. However, at cusp points of the image mapping, the exterior boundary ends abruptly. Since such singular points are notoriously unstable, we conjecture that this process is regularized by a natural quantization of suggestive contours due to physiological boundary-detection mechanisms. The result extends a theorem about how contours must end to one that characterizes surface (Gaussian) curvature in the neighborhood of where they appear to end. Apparent contours and their interaction with local shading thus provide important monocular shape cues.
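The "theorem about how contours must end" that this result extends builds on a classical relation between image and surface curvature; the formula below is supplied as context (Koenderink's 1984 result, under orthographic projection) and does not appear in the abstract itself.

```latex
% Koenderink's relation at a point of the rim (occluding contour), orthographic projection:
%   K          Gaussian curvature of the surface at the rim point
%   \kappa^{a} curvature of the apparent contour in the image
%   \kappa^{r} normal curvature of the surface along the line of sight
%              (positive on an occluding rim)
K \;=\; \kappa^{a}\,\kappa^{r}
```

Since $\kappa^{r} > 0$ on the rim, convexities of the apparent contour project elliptic surface points ($K > 0$) and concavities project hyperbolic ones ($K < 0$); the contour can only end, at a cusp, where $K$ changes sign along the rim.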
|
33
|
|
34
|
View-invariant object category learning, recognition, and search: How spatial and object attention are coordinated using surface-based attentional shrouds. Cogn Psychol 2009; 58:1-48. [DOI: 10.1016/j.cogpsych.2008.05.001] [Citation(s) in RCA: 84] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2007] [Accepted: 05/06/2008] [Indexed: 11/22/2022]
|
35
|
Caplovitz GP, Barroso DJ, Hsieh PJ, Tse PU. fMRI reveals that non-local processing in ventral retinotopic cortex underlies perceptual grouping by temporal synchrony. Hum Brain Mapp 2008; 29:651-61. [PMID: 17598165 PMCID: PMC6871124 DOI: 10.1002/hbm.20429] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
When spatially separated objects appear and disappear in a synchronous manner, they perceptually group into a single global object that itself appears and disappears. We employed functional magnetic resonance imaging (fMRI) to identify brain regions involved in this type of perceptual grouping. Subjects viewed four chromatically-defined disks (one per visual quadrant) that flashed on and off. We contrasted %BOLD signal changes between blocks of synchronously flashing disks (Grouping) with blocks of asynchronously flashing disks (no-Grouping). Results: A region of interest analysis revealed that %BOLD signal change in the Grouping condition was significantly greater than in the no-Grouping condition within retinotopic areas V2, V3, and V4v. Within a single quadrant of the visual field, the spatio-temporal information present in the image was identical across the two stimulus conditions. As such, the two conditions could not be distinguished from each other on the basis of the rate or pattern of flashing within a single visual quadrant. The observed results must therefore arise through nonlocal interactions between or within these retinotopic areas, or arise from outside these retinotopic areas. Furthermore, when V2 and V3 were split into ventral and dorsal sub-ROIs, ventral retinotopic areas V2v and V3v preferentially differentiated between the two conditions whereas the corresponding dorsal areas V2d and V3d did not. In contrast, within hMT+, %BOLD signal was significantly greater in the no-Grouping condition. Conclusion: Nonlocal processing within, between, or to ventral retinotopic cortex at least as early as V2v, and including V3v and V4v, underlies perceptual grouping via temporal synchrony.
Affiliation(s)
- Gideon P Caplovitz
- Department of Psychological and Brain Sciences, Moore Hall, Dartmouth College, Hanover, New Hampshire 03755, USA.
|
36
|
Caplovitz GP, Tse PU. Rotating dotted ellipses: motion perception driven by grouped figural rather than local dot motion signals. Vision Res 2007; 47:1979-91. [PMID: 17548102 DOI: 10.1016/j.visres.2006.12.022] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2006] [Revised: 11/19/2006] [Accepted: 12/11/2006] [Indexed: 11/28/2022]
Abstract
Unlike the motion of a continuous contour, the motion of a single dot is unambiguous and immune to the aperture problem. Here we exploit this fact to explore the conditions under which unambiguous local motion signals are used to drive global percepts of an ellipse undergoing rotation. In previous work, we have shown that a thin, high aspect ratio ellipse will appear to rotate faster than a lower aspect ratio ellipse even when the two in fact rotate at the same angular velocity [Caplovitz, G. P., Hsieh, P. -J., & Tse, P. U. (2006). Mechanisms underlying the perceived angular velocity of a rigidly rotating object. Vision Research, 46(18), 2877-2893]. In this study we examined the perceived speed of rotation of ellipses defined by a virtual contour made up of evenly spaced dots. Results: Ellipses defined by closely spaced dots exhibit the speed illusion observed with continuous contours. That is, thin dotted ellipses appear to rotate faster than fat dotted ellipses when both rotate at the same angular velocity. This illusion is not observed if the dots defining the ellipse are spaced too widely apart. A control experiment ruled out low spatial frequency "blurring" as the source of the illusory percept. Conclusion: Even in the presence of local motion signals that are immune to the aperture problem, the global percept of an ellipse undergoing rotation can be driven by potentially ambiguous motion signals arising from the non-local form of the grouped ellipse itself. Here motion perception is driven by emergent motion signals such as those of virtual contours constructed by grouping procedures. Neither these contours nor their emergent motion signals are present in the image.
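The contour-motion ambiguity the abstract appeals to can be made concrete with a small computation (illustrative only, not the paper's analysis). Under the aperture problem, only the component of image motion normal to the contour is locally available; for a rigidly rotating ellipse that component grows with aspect ratio, while for a circle it is exactly zero, which is one way to see why thin ellipses carry stronger local speed signals at the same angular velocity.

```python
import numpy as np

def mean_normal_speed(a, b, omega=1.0, n=3600):
    """Mean magnitude of the contour-normal component of image velocity for an
    ellipse with semi-axes a, b rotating rigidly at angular velocity omega.
    Only this normal component is available locally (the aperture problem)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    p = np.stack([a * np.cos(theta), b * np.sin(theta)])       # contour points
    v = omega * np.stack([-p[1], p[0]])                        # rigid rotation: v = omega x p
    normal = np.stack([np.cos(theta) / a, np.sin(theta) / b])  # outward normal (unnormalized)
    n_hat = normal / np.linalg.norm(normal, axis=0)
    return np.abs(np.sum(v * n_hat, axis=0)).mean()

thin = mean_normal_speed(4.0, 1.0)    # high aspect ratio
fat = mean_normal_speed(1.5, 1.0)     # low aspect ratio
circle = mean_normal_speed(1.0, 1.0)  # rotation is purely tangential: no local signal
print(f"thin={thin:.3f}  fat={fat:.3f}  circle={circle:.3f}")
```

Analytically the normal component is proportional to sin(theta)cos(theta)(a/b - b/a), which vanishes for a circle and increases with aspect ratio; dotted contours remove this ambiguity at each dot, which is exactly the manipulation the study exploits.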
Affiliation(s)
- G P Caplovitz
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
|
37
|
Tse PU, Caplovitz GP. Contour discontinuities subserve two types of form analysis that underlie motion processing. Prog Brain Res 2007; 154:271-92. [PMID: 17010718 DOI: 10.1016/s0079-6123(06)54015-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
Form analysis subserves motion processing in at least two ways: first, in terms of figural segmentation dedicated to solving the problem of figure-to-figure matching over time, and second, in terms of defining trackable features whose unambiguous motion signals can be generalized to ambiguously moving portions of an object. The former is a primarily ventral process involving the lateral occipital complex and also retinotopic areas such as V2 and V4, and the latter is a dorsal process involving V3A. Contour discontinuities, such as corners, deep concavities, maxima of positive curvature, junctions, and terminators, play a central role in both types of form analysis. Transformational apparent motion will be discussed in the context of figural segmentation and matching, and rotational motion in the context of trackable features. In both cases the analysis of form must proceed in parallel with the analysis of motion, in order to constrain the ongoing analysis of motion.
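The trackable features discussed above (corners, curvature maxima, terminators) can be located computationally as extrema of contour curvature. The sketch below is a generic illustration of that idea, assuming a densely sampled closed 2-D contour; it is not the authors' method, and the discrete-curvature estimator is a standard finite-difference formula.

```python
import numpy as np

def discrete_curvature(c):
    """Signed curvature at each vertex of a closed contour c (n x 2 array),
    from central first and second differences with circular wraparound."""
    d1 = (np.roll(c, -1, axis=0) - np.roll(c, 1, axis=0)) / 2.0
    d2 = np.roll(c, -1, axis=0) - 2 * c + np.roll(c, 1, axis=0)
    num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    return num / den

# Example: a 2:1 ellipse. Its curvature maxima (the two tips) are the
# unambiguous, trackable features during rotation.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ellipse = np.stack([2.0 * np.cos(theta), 1.0 * np.sin(theta)], axis=1)
kappa = np.abs(discrete_curvature(ellipse))

# Trackable features: local maxima of curvature magnitude.
is_max = (kappa > np.roll(kappa, 1)) & (kappa > np.roll(kappa, -1))
features = np.degrees(theta[is_max])
print(f"curvature maxima at {np.round(features, 1)} deg")
```

Because such extrema move rigidly with the object, their unambiguous velocities can be propagated to the ambiguously moving straight or low-curvature portions of the contour, which is the dorsal (V3A) role the chapter assigns to trackable features.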
Affiliation(s)
- Peter Ulric Tse
- H B 6207, Moore Hall, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
|