1
Zhang J, Zhou H, Wang S. Distinct visual processing networks for foveal and peripheral visual fields. Commun Biol 2024; 7:1259. PMID: 39367101; PMCID: PMC11452663; DOI: 10.1038/s42003-024-06980-2.
Abstract
Foveal and peripheral vision are two distinct modes of visual processing essential for navigating the world. However, it remains unclear whether they engage different neural mechanisms and circuits within the visual attentional system. Here, we trained macaques to perform a free-gaze visual search task using natural face and object stimuli and recorded 14,588 visually responsive units from a broadly distributed network of brain regions involved in visual attentional processing. Foveal and peripheral units had substantially different proportions across brain regions and exhibited systematic differences in encoding visual information and visual attention. The spike-local field potential (LFP) coherence of foveal units was more extensively modulated by both attention and visual selectivity, indicating differential engagement of the attention and visual coding network compared to peripheral units. Furthermore, we delineated the interaction and coordination between foveal and peripheral processing for spatial attention and saccade selection. Together, the systematic differences between foveal and peripheral processing provide valuable insights into how the brain processes and integrates visual information from different regions of the visual field.
Affiliation(s)
- Jie Zhang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, 63110, USA.
- Peng Cheng Laboratory, Shenzhen, 518000, China.
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
- Huihui Zhou
- Peng Cheng Laboratory, Shenzhen, 518000, China.
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
- Shuo Wang
- Department of Radiology, Washington University in St. Louis, St. Louis, MO, 63110, USA.
2
Zhang J, Zhou H, Wang S. Distinct visual processing networks for foveal and peripheral visual fields. bioRxiv 2024:2024.06.24.600415. PMID: 38979165; PMCID: PMC11230199; DOI: 10.1101/2024.06.24.600415.
Abstract
Foveal and peripheral vision are two distinct modes of visual processing essential for navigating the world. However, it remains unclear whether they engage different neural mechanisms and circuits within the visual attentional system. Here, we trained macaques to perform a free-gaze visual search task using natural face and object stimuli and recorded 14,588 visually responsive neurons from a broadly distributed network of brain regions involved in visual attentional processing. Foveal and peripheral units had substantially different proportions across brain regions and exhibited systematic differences in encoding visual information and visual attention. The spike-LFP coherence of foveal units was more extensively modulated by both attention and visual selectivity, indicating differential engagement of the attention and visual coding network compared to peripheral units. Furthermore, we delineated the interaction and coordination between foveal and peripheral processing for spatial attention and saccade selection. Finally, search became more efficient with increasing target-induced desynchronization, and foveal and peripheral units exhibited different correlations between neural responses and search behavior. Together, the systematic differences between foveal and peripheral processing provide valuable insights into how the brain processes and integrates visual information from different regions of the visual field.
Significance Statement
This study investigates the systematic differences between foveal and peripheral vision, two crucial components of visual processing essential for navigating our surroundings. By simultaneously recording from a large number of neurons in the visual attentional neural network, we revealed substantial variations in the proportion and functional characteristics of foveal and peripheral units across different brain regions. We uncovered differential modulation of functional connectivity by attention and visual selectivity, elucidated the intricate interplay between foveal and peripheral processing in spatial attention and saccade selection, and linked neural responses to search behavior. Overall, our study contributes to a deeper understanding of how the brain processes and integrates visual information for active visual behaviors.
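The spike-LFP coherence analysis described in this abstract can be illustrated with a minimal sketch. The data below are entirely synthetic (a 20 Hz LFP oscillation and a spike train phase-locked to it), and Welch-based coherence on a binned spike train is one common simplification, not necessarily the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)    # 20 s of data

# Synthetic LFP: a 20 Hz oscillation plus noise
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic spike train whose firing probability is phase-locked to the LFP
rate = 20 * (1 + np.sin(2 * np.pi * 20 * t)) / fs   # per-bin spike probability
spikes = (rng.random(t.size) < rate).astype(float)

# Spike-field coherence via Welch's method on the binned spike train
f, coh = coherence(spikes, lfp, fs=fs, nperseg=1024)

peak_freq = f[1:][np.argmax(coh[1:])]   # skip the DC bin
print(f"spike-LFP coherence peaks near {peak_freq:.1f} Hz")
```

On phase-locked data like this, the coherence spectrum peaks at the oscillation frequency; the attentional modulation reported in the abstract would appear as a condition-dependent change in the height of such a peak.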
3
Zhang J, Cao R, Zhu X, Zhou H, Wang S. Distinct attentional profile and functional connectivity of neurons with visual feature coding in the primate brain. bioRxiv 2024:2024.06.24.600401. PMID: 38979388; PMCID: PMC11230157; DOI: 10.1101/2024.06.24.600401.
Abstract
Visual attention and object recognition are two critical cognitive functions that significantly influence our perception of the world. While these neural processes converge on the temporal cortex, the exact nature of their interactions remains largely unclear. Here, we systematically investigated the interplay between visual attention and object feature coding by training macaques to perform a free-gaze visual search task using natural face and object stimuli. With a large number of units recorded from multiple brain areas, we discovered that units exhibiting visual feature coding displayed a distinct attentional response profile and functional connectivity compared to units not exhibiting feature coding. Attention directed towards search targets enhanced the pattern separation of stimuli across brain areas, and this enhancement was more pronounced for units encoding visual features. Our findings suggest two stages of neural processing, with the early stage primarily focused on processing visual features and the late stage dedicated to processing attention. Importantly, feature coding in the early stage could predict the attentional effect in the late stage. Together, our results suggest an intricate interplay between visual feature and attention coding in the primate brain, which can be attributed to the differential functional connectivity and neural networks engaged in these processes.
4
Malik G, Crowder D, Mingolla E. Extreme image transformations affect humans and machines differently. Biol Cybern 2023; 117:331-343. PMID: 37310489; PMCID: PMC10600046; DOI: 10.1007/s00422-023-00968-7.
Abstract
Some recent artificial neural networks (ANNs) claim to model aspects of primate neural and human performance data. Their success in object recognition, however, depends on exploiting low-level features for solving visual tasks in a way that humans do not. As a result, out-of-distribution or adversarial input is often challenging for ANNs. Humans instead learn abstract patterns and are mostly unaffected by many extreme image distortions. We introduce a set of novel image transforms inspired by neurophysiological findings and evaluate humans and ANNs on an object recognition task. We show that machines perform better than humans on certain transforms and struggle to match human performance on others that humans find easy. We quantify the differences in accuracy between humans and machines and derive a ranking of transform difficulty from the human data. We also suggest how certain characteristics of human visual processing can be adapted to improve ANN performance on the transforms that machines find difficult.
Affiliation(s)
- Girik Malik
- Northeastern University, Boston, MA 02115 USA
5
Abstract
Perception and memory are traditionally thought of as separate cognitive functions, supported by distinct brain regions. The canonical perspective is that perceptual processing of visual information is supported by the ventral visual stream, whereas long-term declarative memory is supported by the medial temporal lobe. However, this modular framework cannot account for the increasingly large body of evidence that reveals a role for early visual areas in long-term recognition memory and a role for medial temporal lobe structures in high-level perceptual processing. In this article, we review relevant research conducted in humans, nonhuman primates, and rodents. We conclude that the evidence is largely inconsistent with theoretical proposals that draw sharp functional boundaries between perceptual and memory systems in the brain. Instead, the weight of the empirical findings is best captured by a representational-hierarchical model that emphasizes differences in content, rather than in cognitive processes within the ventral visual stream and medial temporal lobe.
Affiliation(s)
- Chris B Martin
- Department of Psychology, Florida State University, Tallahassee, Florida, USA
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
6
A Pilot Investigation of Visual Pathways in Patients with Mild Traumatic Brain Injury. Neurol Int 2023; 15:534-548. PMID: 36976675; PMCID: PMC10054811; DOI: 10.3390/neurolint15010032.
Abstract
In this study, we examined visual processing within primary visual areas (V1) in normal and visually impaired individuals who exhibit significant visual symptomology due to sports-related mild traumatic brain injury (mTBI). Five spatial frequency stimuli were applied to the right, left, and both eyes to assess visual processing in patients with sports-related mTBI who exhibited visual abnormalities (e.g., photophobia, blurriness) and in controls. Left/right-eye and binocular integration was measured by quantifying spectral power and visual event-related potentials. The principal results show that the power spectral density (PSD) measurements display a distinct loss in the alpha bandwidth range, corresponding to more instances of medium-sized receptive field loss. Medium-sized receptive field loss may correspond to degraded parvocellular (p-cell) processing. Our major conclusion provides a new measurement, using PSD analysis, to assess mTBI conditions from primary V1 areas. Statistical analysis demonstrated significant differences between the mTBI and control cohorts in visual evoked potential (VEP) amplitude responses and PSD measurements. Additionally, the PSD measurements were able to track the improvement in the mTBI primary visual areas over time through rehabilitation.
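The alpha-band PSD comparison described above can be sketched as follows. The signals and the degree of alpha attenuation are synthetic assumptions chosen for illustration, not the study's data:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                          # EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)     # one minute of data

# Two synthetic occipital recordings: a "control" trace with a clear
# 10 Hz alpha rhythm and an "mTBI-like" trace with the alpha attenuated
control = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
attenuated = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Integrate the Welch PSD estimate over the alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[mask]) * (f[1] - f[0])   # rectangle-rule integral

p_control = alpha_power(control, fs)
p_attenuated = alpha_power(attenuated, fs)
print(f"alpha power: control={p_control:.2f}, attenuated={p_attenuated:.2f}")
```

In the study's setting, the reported alpha-band loss in the mTBI group would show up as a lower value of this band-power integral relative to controls.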
7
Janini D, Hamblin C, Deza A, Konkle T. General object-based features account for letter perception. PLoS Comput Biol 2022; 18:e1010522. PMID: 36155642; PMCID: PMC9536565; DOI: 10.1371/journal.pcbi.1010522.
Abstract
After years of experience, humans become experts at perceiving letters. Is this visual capacity attained by learning specialized letter features, or by reusing general visual features previously learned in service of object categorization? To explore this question, we first measured the perceptual similarity of letters in two behavioral tasks, visual search and letter categorization. Then, we trained deep convolutional neural networks on either 26-way letter categorization or 1000-way object categorization, as a way to operationalize possible specialized letter features and general object-based features, respectively. We found that the general object-based features more robustly correlated with the perceptual similarity of letters. We then operationalized additional forms of experience-dependent letter specialization by altering object-trained networks with varied forms of letter training; however, none of these forms of letter specialization improved the match to human behavior. Thus, our findings reveal that it is not necessary to appeal to specialized letter representations to account for perceptual similarity of letters. Instead, we argue that it is more likely that the perception of letters depends on domain-general visual features.
For over a century, scientists have conducted behavioral experiments to investigate how the visual system recognizes letters, but it has proven difficult to propose a model of the feature space underlying this capacity. Here we leveraged recent advances in machine learning to model a wide variety of features ranging from specialized letter features to general object-based features. Across two large-scale behavioral experiments we find that general object-based features account well for letter perception, and that adding letter specialization did not improve the correspondence to human behavior. It is plausible that the ability to recognize letters largely relies on general visual features unaltered by letter learning.
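The comparison between feature spaces and behavior follows the logic of representational similarity analysis: build a pairwise-dissimilarity matrix from each model's letter embeddings and correlate it with behavioral dissimilarities. A minimal sketch, using random matrices in place of actual network activations and simulated behavioral data constructed, by assumption, to favor the object-based space:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_letters, n_feat = 26, 50

# Hypothetical feature embeddings of the 26 letters from two models
object_feats = rng.standard_normal((n_letters, n_feat))
letter_feats = rng.standard_normal((n_letters, n_feat))

# Pairwise dissimilarity (1 - correlation) over all letter pairs
rdm_object = pdist(object_feats, metric="correlation")
rdm_letter = pdist(letter_feats, metric="correlation")

# Simulated behavioral dissimilarities that, by construction, track the
# object-based feature space more closely than the letter-trained one
behavior = rdm_object + 0.1 * rng.standard_normal(rdm_object.size)

rho_obj, _ = spearmanr(rdm_object, behavior)
rho_let, _ = spearmanr(rdm_letter, behavior)
print(f"object-feature match rho={rho_obj:.2f}, letter-feature match rho={rho_let:.2f}")
```

The model whose dissimilarity structure yields the higher rank correlation with behavior is the better account of the perceptual space, which is the comparison the abstract describes.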
Affiliation(s)
- Daniel Janini
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
- Chris Hamblin
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
- Arturo Deza
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
8
Pegado F. Written Language Acquisition Is Both Shaped by and Has an Impact on Brain Functioning and Cognition. Front Hum Neurosci 2022; 16:819956. PMID: 35754773; PMCID: PMC9226919; DOI: 10.3389/fnhum.2022.819956.
Abstract
Spoken language is a distinctive trait of our species and is naturally acquired during infancy. Written language, in contrast, is artificial, and the correspondences between arbitrary visual symbols and spoken language for reading and writing must be explicitly learned with external help. In this paper, I present several examples of how written language acquisition is both shaped by and has an impact on brain function and cognition. They show, on the one hand, how our phylogenetic legacy influences education and, on the other hand, how ontogenetic needs for education can rapidly subdue deeply rooted neurocognitive mechanisms. Understanding these bidirectional influences provides a more dynamic view of how plasticity interfaces phylogeny and ontogeny in human learning, with implications for both neuroscience and education.
Affiliation(s)
- Felipe Pegado
- Aix-Marseille University, CNRS, LPC, Marseille, France
9
Dimension of visual information interacts with working memory in monkeys and humans. Sci Rep 2022; 12:5335. PMID: 35351948; PMCID: PMC8964748; DOI: 10.1038/s41598-022-09367-7.
Abstract
Humans demonstrate behavioural advantages (biases) towards particular dimensions (colour or shape of visual objects), but such biases are significantly altered in neuropsychological disorders. Recent studies have shown that lesions in the prefrontal cortex do not abolish dimensional biases, and therefore suggest that such biases might not depend on top-down prefrontal-mediated attention and instead emerge as bottom-up processing advantages. We hypothesised that if dimensional biases merely emerge from an enhancement of object features, the presence of visual objects would be necessary for the manifestation of dimensional biases. In a specifically-designed working memory task, in which macaque monkeys and humans performed matching based on the object memory rather than the actual object, we found significant dimensional biases in both species, which appeared as a shorter response time and higher accuracy in the preferred dimension (colour and shape dimension in humans and monkeys, respectively). Moreover, the mnemonic demands of the task influenced the magnitude of dimensional bias. Our findings in two primate species indicate that the dichotomy of top-down and bottom-up processing does not fully explain the emergence of dimensional biases. Instead, dimensional biases may emerge when processed information regarding visual object features interacts with mnemonic and executive functions to guide goal-directed behaviour.
10
Trapp R, Fernandez-Juricic E. How visual system configuration can play a role in individual recognition: a visual modeling study. Anim Cogn 2021; 25:205-216. PMID: 34383151; DOI: 10.1007/s10071-021-01548-7.
Abstract
Many species rely on individual recognition (i.e., the use of individual signals to identify and remember a conspecific) to tune their social interactions. However, little is known about how the configuration of the sensory system may affect the perception of individual recognition signals over space. Utilizing a visual modeling approach, we quantified (1) the threshold distance between the receiver and the signaler at which individual recognition can no longer accurately occur, and (2) the regions of the head most likely to contain the individual recognition signals. We used chickens (Gallus gallus) as our study species, as they use visual individual recognition and additionally have a well-studied visual system. We took pictures of different individuals and followed a visual modeling approach considering color vision, visual acuity, and pattern processing of the receiver. We found that distance degrades the quality of information in potential individual recognition signals. We estimated that the neighbor distance at which a receiver may have difficulty recognizing a conspecific was between 0.25 and 0.30 m in chickens, which may be related to a decrease in available features of the potential signal. This signal perception threshold closely matches the recognition distance predicted by previous behavioral approaches. Additionally, we found that certain regions of the head (beak, cheek, comb, eye) may be good candidates for individual recognition signals. Overall, our findings support that recognition in chickens occurs at short distances due to constraints imposed by their visual system, which can affect the costs and benefits associated with social spacing in groups.
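The threshold-distance idea can be sketched with simple viewing geometry: a feature stops being resolvable once it subtends less than one cycle of the receiver's visual acuity. The acuity and feature size below are assumed round numbers, and this single-cycle criterion is a deliberate simplification of the full color/acuity/pattern model used in the study:

```python
import math

def threshold_distance(feature_size_m: float, acuity_cpd: float) -> float:
    """Distance (m) beyond which a feature of the given physical size
    subtends less than one resolvable cycle of the receiver's acuity."""
    # Smallest resolvable angular period: one full cycle, in degrees
    min_angle_deg = 1.0 / acuity_cpd
    min_angle_rad = math.radians(min_angle_deg)
    # Invert the viewing geometry: size = 2 * distance * tan(angle / 2)
    return feature_size_m / (2.0 * math.tan(min_angle_rad / 2.0))

# Hypothetical numbers: a 1 cm head feature viewed by a receiver with
# an acuity of 7 cycles/degree (an assumed, roughly chicken-like value)
d = threshold_distance(0.01, 7.0)
print(f"single-cycle threshold distance ~ {d:.2f} m")
```

A single resolvable cycle is far too coarse for recognizing an individual; reliable pattern discrimination needs many resolvable cycles across the signal, which is why behaviorally estimated recognition distances are much shorter than this upper bound.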
Affiliation(s)
- Rebecca Trapp
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA.
11
Mohammed ZA, Tejay GP. Examining the privacy paradox through individuals’ neural disposition in e-commerce: An exploratory neuroimaging study. Comput Secur 2021. DOI: 10.1016/j.cose.2021.102201.
12
Mansouri FA, Buckley MJ, Fehring DJ, Tanaka K. The Role of Primate Prefrontal Cortex in Bias and Shift Between Visual Dimensions. Cereb Cortex 2021; 30:85-99. PMID: 31220222; PMCID: PMC7029686; DOI: 10.1093/cercor/bhz072.
Abstract
Imaging and neural activity recording studies have shown activation in the primate prefrontal cortex when shifting attention between visual dimensions is necessary to achieve goals. A fundamental unanswered question is whether representations of these dimensions emerge from top-down attentional processes mediated by prefrontal regions or from bottom-up processes within visual cortical regions. We hypothesized a causative link between prefrontal cortical regions and dimension-based behavior. In large cohorts of humans and macaque monkeys, performing the same attention shifting task, we found that both species successfully shifted between visual dimensions, but both species also showed a significant behavioral advantage/bias to a particular dimension; however, these biases were in opposite directions in humans (bias to color) versus monkeys (bias to shape). Monkeys' bias remained after selective bilateral lesions within the anterior cingulate cortex (ACC), frontopolar cortex, dorsolateral prefrontal cortex (DLPFC), orbitofrontal cortex (OFC), or superior, lateral prefrontal cortex. However, lesions within certain regions (ACC, DLPFC, or OFC) impaired monkeys' ability to shift between these dimensions. We conclude that goal-directed processing of a particular dimension for the executive control of behavior depends on the integrity of prefrontal cortex; however, representation of competing dimensions and bias toward them does not depend on top-down prefrontal-mediated processes.
Affiliation(s)
- Farshad A Mansouri
- Cognitive Neuroscience Laboratory, Department of Physiology, Monash Biomedicine Discovery Institute, Monash University, Victoria, Australia.
- ARC Centre of Excellence for Integrative Brain Function, Monash University, Victoria, Australia.
- Mark J Buckley
- Department of Experimental Psychology, Oxford University, Oxford, UK
- Daniel J Fehring
- Cognitive Neuroscience Laboratory, Department of Physiology, Monash Biomedicine Discovery Institute, Monash University, Victoria, Australia.
- ARC Centre of Excellence for Integrative Brain Function, Monash University, Victoria, Australia.
- Keiji Tanaka
- Cognitive Brain Mapping Laboratory, RIKEN Center for Brain Science, Wako, Saitama, Japan
13
Dimensional bias and adaptive adjustments in inhibitory control of monkeys. Anim Cogn 2021; 24:815-828. PMID: 33554317; DOI: 10.1007/s10071-021-01483-7.
Abstract
Humans and macaque monkeys, performing a Wisconsin Card Sorting Test (WCST), show a significant behavioral bias to a particular sensory dimension (e.g., color or shape); however, lesions in prefrontal cortical regions do not abolish the dimensional biases in monkeys, and it has therefore been proposed that these biases emerge in earlier stages of visual information processing. It remains unclear whether such dimensional biases are unique to the WCST, in which attention-shifting between dimensions is required, or affect other aspects of executive function such as 'response inhibition' and 'error-induced behavioral adjustments'. To address this question, we trained six monkeys (Macaca mulatta) to perform a stop-signal task in which they had to inhibit their response when an instruction for inhibition was given by changing the color or shape of a visual stimulus. Stop-signal reaction time (SSRT) is an index of inhibitory processes. In all monkeys, SSRT was significantly shorter, and the probability of a successful inhibition was significantly higher, when a change in the shape dimension acted as the stop-cue. Humans show response slowing following a failure in response inhibition and also adopt proactive slowing after facing demands for response inhibition. We found such adaptive behavioral adjustments, with the same pattern, in monkeys' behavior; however, the dimensional bias did not modulate them. Our findings, showing a dimensional bias in monkeys, with the same pattern, in two different executive control tasks, support the hypothesis that the bias toward the shape dimension emerges in early stages of visual information processing.
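SSRT is commonly estimated with the integration method: take the go-RT quantile corresponding to the probability of responding on stop trials, then subtract the mean stop-signal delay. The sketch below uses simulated trial data and assumes this standard method rather than the paper's exact procedure:

```python
import numpy as np

def ssrt_integration(go_rts_ms, stop_delays_ms, stopped):
    """Estimate stop-signal reaction time with the integration method:
    SSRT = go-RT quantile at P(respond | stop) minus mean stop-signal delay."""
    go = np.sort(np.asarray(go_rts_ms, dtype=float))
    p_respond = 1.0 - np.mean(stopped)            # P(respond | stop signal)
    idx = int(np.ceil(p_respond * go.size)) - 1   # index of that quantile
    nth_rt = go[max(idx, 0)]
    return nth_rt - np.mean(stop_delays_ms)

# Simulated session: 400 go trials, 100 stop trials at staircased delays
rng = np.random.default_rng(3)
go_rts = rng.normal(450, 60, 400)              # go-trial RTs (ms)
delays = np.repeat([150, 200, 250, 300], 25)   # stop-signal delays (ms)
stopped = rng.random(delays.size) < 0.5        # ~50% successful stops

ssrt = ssrt_integration(go_rts, delays, stopped)
print(f"estimated SSRT ~ {ssrt:.0f} ms")
```

A shorter SSRT for shape-cued stop trials, as the abstract reports, would mean the inhibitory process finishes faster when the stop-cue is a shape change than when it is a color change.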
14
Ghasemian S, Vardanjani MM, Sheibani V, Mansouri FA. Color-hierarchies in executive control of monkeys' behavior. Am J Primatol 2021; 83:e23231. PMID: 33400335; DOI: 10.1002/ajp.23231.
Abstract
Processing advantages for particular colors (color-hierarchies) influence emotional regulation and cognitive functions in humans and manifest as an advantage of red over green in triggering response inhibition but not response execution. It remains unknown how such color-hierarchies emerge in human cognition and whether they are unique properties of the human brain with its advanced trichromatic vision. Dominant models propose that color-hierarchies are formed through experience-dependent learning that associates various colors with different human-made conventions and concepts (e.g., traffic lights). We hypothesized that if color-hierarchies modulate cognitive functions in trichromatic nonhuman primates, it would indicate a preserved neurobiological basis for such color-hierarchies. We trained six macaque monkeys to perform cognitive tasks that required behavioral control based on colored cues. Color-hierarchies significantly influenced monkeys' behavior and appeared as an advantage of red over green in triggering response inhibition but not response execution. For all monkeys, the order of color-hierarchies, in both response inhibition and execution, was similar to that in humans. In addition, the cognitive effects of color-hierarchies were not limited to the trial in which the colored cues were encountered but persisted in the following trials, in which there was no colored cue in the visual scene. These findings suggest that color-hierarchies do not result from associating colors with human-made conventions and that a simple processing advantage in the retina or early visual pathways does not explain their cognitive effects. The discovery of color-hierarchies in the cognitive repertoire of monkeys indicates that although human and monkey evolution diverged about 25 million years ago, color-hierarchies are evolutionarily preserved, with the same order, in trichromatic primates and exert overarching effects on the executive control of behavior.
Affiliation(s)
- Sadegh Ghasemian
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Marzieh M Vardanjani
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Vahid Sheibani
- Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Cognitive Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran.
- Farshad A Mansouri
- ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
15
Topographic Mapping as a Basic Principle of Functional Organization for Visual and Prefrontal Functional Connectivity. eNeuro 2020; 7:ENEURO.0532-19.2019. PMID: 31988218; PMCID: PMC7029189; DOI: 10.1523/eneuro.0532-19.2019.
Abstract
The organization of region-to-region functional connectivity has major implications for understanding information transfer and transformation between brain regions. We extended connective field mapping methodology to 3-D anatomic space to derive estimates of corticocortical functional organization. Using multiple publicly available human (both male and female) resting-state fMRI data samples for model testing and replication analysis, we have three main findings. First, we found that the functional connectivity between early visual regions maintained a topographic relationship along the anterior-posterior dimension, which corroborates previous research. Higher order visual regions showed a pattern of connectivity that supports convergence and biased sampling, which has implications for their receptive field properties. Second, we demonstrated that topographic organization is a fundamental aspect of functional connectivity across the entire cortex, with higher topographic connectivity between regions within a functional network than across networks. The principal gradient of topographic connectivity across the cortex resembled whole-brain gradients found in previous work. Finally, we showed that the organization of higher order regions such as the lateral prefrontal cortex demonstrates functional gradients of topographic connectivity and convergence. These organizational features of the lateral prefrontal cortex predict task-based activation patterns, particularly visual specialization and higher order rules. In sum, these findings suggest that topographic input is a fundamental motif of functional connectivity between cortical regions for information processing and transfer, with maintenance of topography potentially important for preserving the integrity of information from one region to another.
16
Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019; 81:2237-2264. PMID: 31218601; PMCID: PMC6848053; DOI: 10.3758/s13414-019-01789-2.
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Collapse
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA.
| |
Collapse
|
17
|
Yu CP, Liu H, Samaras D, Zelinsky GJ. Modelling attention control using a convolutional neural network designed after the ventral visual pathway. VISUAL COGNITION 2019. [DOI: 10.1080/13506285.2019.1661927] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Chen-Ping Yu
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Huidong Liu
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Dimitrios Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Gregory J. Zelinsky
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
| |
Collapse
|
18
|
Chen X, Li X, Yan T, Dong Q, Mao Z, Wang Y, Yang N, Zhang Q, Zhao W, Zhai J, Chen M, Du B, Deng X, Ji F, Xiang YT, Song J, Wu H, Dong Q, Chen C, Wang C, Li J. Network functional connectivity analysis in individuals at ultrahigh risk for psychosis and patients with schizophrenia. Psychiatry Res Neuroimaging 2019; 290:51-57. [PMID: 31288150 DOI: 10.1016/j.pscychresns.2019.06.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Revised: 06/19/2019] [Accepted: 06/21/2019] [Indexed: 02/07/2023]
Abstract
Schizophrenia is a severe mental disorder whose onset is preceded by a stage of ultrahigh risk (UHR) for developing psychosis. Analyzing individuals at UHR is therefore essential for identifying predictive biomarkers for the onset of schizophrenia. The current study aimed to identify such biomarkers based on a voxelwise whole-brain functional degree centrality (FDC) analysis. Conjunction analysis showed that, compared with healthy controls, both UHR subjects and patients with schizophrenia showed significantly increased FDC at the medial prefrontal cortex (MPFC) and significantly decreased FDC at the right fusiform gyrus (FG). A subsequent partial correlation analysis showed significant correlations between disorganization symptoms and FDC at the MPFC and the right FG for both UHR subjects and patients with schizophrenia. These findings suggest that FDC within the MPFC and the right FG could serve as candidate biomarkers for the onset of schizophrenia.
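The degree-centrality measure itself is simple to sketch (a toy simulation with an assumed correlation threshold of r > 0.25, not the study's preprocessing pipeline): FDC counts, for each voxel, how many other voxels it correlates with above threshold, so voxels in a strongly coupled hub stand out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic resting-state data: 100 "voxels" x 150 time points, with a
# hub cluster (first 10 voxels) sharing a common signal.
n_vox, n_time = 100, 150
data = rng.standard_normal((n_vox, n_time))
hub_signal = rng.standard_normal(n_time)
data[:10] += hub_signal                      # correlated hub voxels

# Voxelwise functional degree centrality (FDC): for each voxel, count
# how many other voxels it correlates with above the threshold.
z = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
corr = z @ z.T / n_time
np.fill_diagonal(corr, 0)                    # ignore self-correlation
fdc = (corr > 0.25).sum(axis=1)

print("mean FDC, hub vs rest:", fdc[:10].mean(), fdc[10:].mean())
```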
Collapse
Affiliation(s)
- Xiongying Chen
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China
| | - Xianbin Li
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China
| | - Tongjun Yan
- The PLA 102nd Hospital and Mental Health Center of the Military, Changzhou 213003, PR China
| | - Qianhong Dong
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China
| | - Zhen Mao
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China
| | - Yanyan Wang
- The PLA 102nd Hospital and Mental Health Center of the Military, Changzhou 213003, PR China
| | - Ningbo Yang
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China; First Affiliated Hospital of Henan University of Science and Technology, No.24 Jinghua Road, Jianxi District, Luoyang 471003, China
| | - Qiumei Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China; School of Mental Health, Jining Medical University, 45# Jianshe South Road, Jining 272013, Shandong Province, PR China
| | - Wan Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
| | - Jinguo Zhai
- School of Mental Health, Jining Medical University, 45# Jianshe South Road, Jining 272013, Shandong Province, PR China
| | - Min Chen
- School of Mental Health, Jining Medical University, 45# Jianshe South Road, Jining 272013, Shandong Province, PR China
| | - Boqi Du
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
| | - Xiaoxiang Deng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
| | - Feng Ji
- School of Mental Health, Jining Medical University, 45# Jianshe South Road, Jining 272013, Shandong Province, PR China
| | - Yu-Tao Xiang
- Faculty of Health Sciences, University of Macau, Avenida da Universidade, Taipa, Macau, PR China
| | - Jie Song
- Shengli Hospital of Shengli Petroleum Administration Bureau, Dongying 257022, Shandong province, PR China
| | - Hongjie Wu
- Shengli Hospital of Shengli Petroleum Administration Bureau, Dongying 257022, Shandong province, PR China
| | - Qi Dong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
| | - Chuansheng Chen
- Department of Psychology and Social Behavior, University of California, Irvine, CA 92697, United States
| | - Chuanyue Wang
- The National Clinical Research Center for Mental Disorders & Beijing Key Laboratory of Mental Disorders & the Advanced Innovation Center for Human Brain Protection, Beijing Anding Hospital, School of Mental Health, Capital Medical University, Beijing, China.
| | - Jun Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China.
| |
Collapse
|
19
|
Grossberg S. The Embodied Brain of SOVEREIGN2: From Space-Variant Conscious Percepts During Visual Search and Navigation to Learning Invariant Object Categories and Cognitive-Emotional Plans for Acquiring Valued Goals. Front Comput Neurosci 2019; 13:36. [PMID: 31333437 PMCID: PMC6620614 DOI: 10.3389/fncom.2019.00036] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Accepted: 05/21/2019] [Indexed: 11/13/2022] Open
Abstract
This article develops a model of how reactive and planned behaviors interact in real time. Controllers for both animals and animats need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once an environment becomes familiar. The SOVEREIGN model embodied these capabilities, and was tested in a 3D virtual reality environment. Neural models have characterized important adaptive and intelligent processes that were not included in SOVEREIGN. A major research program is summarized herein by which to consistently incorporate them into an enhanced model called SOVEREIGN2. Key new perceptual, cognitive, cognitive-emotional, and navigational processes require feedback networks which regulate resonant brain states that support conscious experiences of seeing, feeling, and knowing. Also included are computationally complementary processes of the mammalian neocortical What and Where processing streams, and homologous mechanisms for spatial navigation and arm movement control. These include: Unpredictably moving targets are tracked using coordinated smooth pursuit and saccadic movements. Estimates of target and present position are computed in the Where stream, and can activate approach movements. Motion cues can elicit orienting movements to bring new targets into view. Cumulative movement estimates are derived from visual and vestibular cues. Arbitrary navigational routes are incrementally learned as a labeled graph of angles turned and distances traveled between turns. Noisy and incomplete visual sensor data are transformed into representations of visual form and motion. Invariant recognition categories are learned in the What stream. Sequences of invariant object categories are stored in a cognitive working memory, whereas sequences of movement positions and directions are stored in a spatial working memory. Stored sequences trigger learning of cognitive and spatial/motor sequence categories or plans, also called list chunks, which control planned decisions and movements toward valued goal objects. Predictively successful list chunk combinations are selectively enhanced or suppressed via reinforcement learning and incentive motivational learning. Expected vs. unexpected event disconfirmations regulate these enhancement and suppressive processes. Adaptively timed learning enables attention and action to match task constraints. Social cognitive joint attention enables imitation learning of skills by learners who observe teachers from different spatial vantage points.
Collapse
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
| |
Collapse
|
20
|
Abstract
This chapter reviews the literature on the development of visual-spatial attention. A brief overview of the brain mechanisms of visual perception is provided, followed by discussion of neural maturation in the prenatal period, infancy, and childhood. This is followed by sections on gaze control, eye-movement systems, and orienting. The chapter concludes with consideration of the development of the perception of space, objects, and scenes. Visual-spatial attention reflects an intricate set of motor, perceptual, and cognitive systems that work jointly and develop in tandem.
Collapse
|
21
|
Vaessen MJ, Abassi E, Mancini M, Camurri A, de Gelder B. Computational Feature Analysis of Body Movements Reveals Hierarchical Brain Organization. Cereb Cortex 2018; 29:3551-3560. [DOI: 10.1093/cercor/bhy228] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Revised: 08/20/2018] [Accepted: 08/21/2018] [Indexed: 11/13/2022] Open
Abstract
Social species spend considerable time observing the body movements of others to understand their actions, predict their emotions, watch their games, or enjoy their dance movements. Despite the importance of the information conveyed by body movements, we still know surprisingly little about the details of the brain mechanisms underlying movement perception. In this fMRI study, we investigated the relations between movement features obtained from automated computational analyses of video clips and the corresponding brain activity. Our results show that low-level computational features map to specific brain areas related to early visual- and motion-sensitive regions, while mid-level computational features are related to dynamic aspects of posture encoded in occipital–temporal cortex, the posterior superior temporal sulcus, and the superior parietal lobe. Furthermore, behavioral features obtained from subjective ratings correlated with activity in higher action observation regions. Our computational feature-based analysis suggests that movement encoding is organized in the brain not so much by semantic categories as by the feature statistics of the body movements.
Collapse
Affiliation(s)
- Maarten J Vaessen
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, EV Maastricht, the Netherlands
| | - Etienne Abassi
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, EV Maastricht, the Netherlands
| | - Maurizio Mancini
- Department of Informatics, Casa Paganini-InfoMus Research Centre, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy
| | - Antonio Camurri
- Department of Informatics, Casa Paganini-InfoMus Research Centre, Bioengineering, Robotics, and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy
| | - Beatrice de Gelder
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, EV Maastricht, the Netherlands
- Department of Computer Science, University College London, London, England, United Kingdom
| |
Collapse
|
22
|
Dietrich S, Hertrich I, Müller-Dahlhaus F, Ackermann H, Belardinelli P, Desideri D, Seibold VC, Ziemann U. Reduced Performance During a Sentence Repetition Task by Continuous Theta-Burst Magnetic Stimulation of the Pre-supplementary Motor Area. Front Neurosci 2018; 12:361. [PMID: 29896086 PMCID: PMC5987029 DOI: 10.3389/fnins.2018.00361] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Accepted: 05/09/2018] [Indexed: 11/23/2022] Open
Abstract
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that the left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of the pre-SMA in comprehending accelerated speech was tested by inducing a transient “virtual lesion” using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to stimulation (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified as the percentage of correctly reproduced speech material. At high speech rates, subjects showed decreased performance after cTBS of the pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of the pre-SMA seems to affect its inhibitory function, which normally eliminates erroneous speech material prior to speaking or, in the case of perception, prior to encoding into a semantically and pragmatically meaningful message.
Collapse
Affiliation(s)
- Susanne Dietrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.,Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
| | - Ingo Hertrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Florian Müller-Dahlhaus
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.,Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University, University of Mainz, Mainz, Germany
| | - Hermann Ackermann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Paolo Belardinelli
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Debora Desideri
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Verena C Seibold
- Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
| | - Ulf Ziemann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| |
Collapse
|
23
|
Spivey MJ, Batzloff BJ. Bridgemanian space constancy as a precursor to extended cognition. Conscious Cogn 2018; 64:164-175. [PMID: 29709438 DOI: 10.1016/j.concog.2018.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2018] [Revised: 03/23/2018] [Accepted: 04/02/2018] [Indexed: 11/30/2022]
Abstract
A few decades ago, cognitive psychologists generally took for granted that the reason we perceive our visual environment as one contiguous stable whole (i.e., space constancy) is because we have an internal mental representation of the visual environment as one contiguous stable whole. They supposed that the non-contiguous visual images that are gathered during the brief fixations that intervene between pairs of saccadic eye movements (a few times every second) are somehow stitched together to construct this contiguous internal mental representation. Determining how exactly the brain does this proved to be a vexing puzzle for vision researchers. Bruce Bridgeman's research career is the story of how meticulous psychophysical experimentation and an ingenious theoretical insight eventually solved this puzzle. The reason that it was so difficult for researchers to figure out how the brain stitches together these visual snapshots into one accurately rendered mental representation of the visual environment is that it doesn't do that. Bruce discovered that the brain couldn't do that if it tried. The neural information that codes for saccade amplitude and direction is simply too inaccurate to determine exact relative locations of each fixation. Rather than the perception of space constancy being the result of an internal representation, Bruce determined that it is the result of a brain that simply assumes that external space remains constant, and it rarely checks to verify this assumption. In our extension of Bridgeman's formulation, we suggest that objects in the world often serve as their own representations, and cognitive operations can be performed on those objects themselves, rather than on mental representations of them.
Collapse
Affiliation(s)
- Michael J Spivey
- Cognitive and Information Sciences, University of California, Merced, United States.
| | - Brandon J Batzloff
- Cognitive and Information Sciences, University of California, Merced, United States
| |
Collapse
|
24
|
Incremental change in the set of coactive cortical assemblies enables mental continuity. Physiol Behav 2016; 167:222-237. [PMID: 27660035 DOI: 10.1016/j.physbeh.2016.09.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Revised: 09/09/2016] [Accepted: 09/17/2016] [Indexed: 01/07/2023]
Abstract
This opinion article explores how sustained neural firing in association areas allows high-order mental representations to be coactivated over multiple perception-action cycles, permitting sequential mental states to share overlapping content and thus be recursively interrelated. The term "state-spanning coactivity" (SSC) is introduced to refer to neural nodes that remain coactive as a group over a given period of time. SSC ensures that contextual groupings of goal or motor-relevant representations will demonstrate continuous activity over a delay period. It also allows potentially related representations to accumulate and coactivate despite delays between their initial appearances. The nodes that demonstrate SSC are a subset of the active representations from the previous state, and can act as referents to which newly introduced representations of succeeding states relate. Coactive nodes pool their spreading activity, converging on and activating new nodes, adding these to the remaining nodes from the previous state. Thus, the overall distribution of coactive nodes in cortical networks evolves gradually during contextual updating. The term "incremental change in state-spanning coactivity" (icSSC) is introduced to refer to this gradual evolution. Because a number of associated representations can be sustained continuously, each brain state is embedded recursively in the previous state, amounting to an iterative process that can implement learned algorithms to progress toward a complex result. The longer representations are sustained, the more successive mental states can share related content, exhibit progressive qualities, implement complex algorithms, and carry thematic or narrative continuity. Included is a discussion of the implications that SSC and icSSC may have for understanding working memory, defining consciousness, and constructing AI architectures.
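A toy simulation can make the icSSC idea concrete (the network, the recruitment threshold, and the retention rule below are invented purely for illustration): part of the active assembly is retained across cycles while pooled spreading activity recruits new nodes, so consecutive states always share content.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy associative network: 20 nodes with sparse random excitatory links.
n = 20
W = (rng.random((n, n)) < 0.15).astype(float)
np.fill_diagonal(W, 0)

active = {0, 1, 2}               # initial coactive assembly
history = [set(active)]

# Each cycle: coactive nodes pool spreading activity onto new nodes,
# while a subset of the previous assembly is retained (the SSC subset).
for step in range(5):
    drive = W[list(active)].sum(0)            # pooled input to every node
    recruited = set(np.where(drive >= 2)[0]) - active
    retained = set(list(active)[: max(1, len(active) - 1)])
    active = retained | recruited
    history.append(set(active))

# Consecutive states share members, so content changes incrementally.
overlaps = [len(a & b) / max(len(a | b), 1)
            for a, b in zip(history, history[1:])]
print("stepwise Jaccard overlaps:", [round(o, 2) for o in overlaps])
```

Because at least one node is always retained, every stepwise overlap is nonzero, which is the continuity property the article argues for.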
Collapse
|
25
|
Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance. J Neurosci 2015; 35:13402-18. [PMID: 26424887 DOI: 10.1523/jneurosci.5181-14.2015] [Citation(s) in RCA: 88] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.
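The linking hypothesis, a learned weighted sum of population firing rates, can be sketched with synthetic data (random stand-in "rates" and a ridge-regression readout; the dimensions and regularization below are illustrative and are not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: mean firing rates of 100 "IT" neurons on 200
# recognition tasks, with behavior generated as a noisy weighted sum of
# the rates (dimensions chosen for a stable fit, not the paper's 64
# tests and ~60,000 neurons).
n_tasks, n_neurons = 200, 100
rates = rng.standard_normal((n_tasks, n_neurons))
true_w = rng.standard_normal(n_neurons) / np.sqrt(n_neurons)
behavior = rates @ true_w + 0.1 * rng.standard_normal(n_tasks)

# Learn a weighted sum by closed-form ridge regression on 150 tasks,
# then predict behavior on the 50 held-out tasks.
train, test = slice(0, 150), slice(150, None)
lam = 1.0
X, y = rates[train], behavior[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ y)

pred = rates[test] @ w
r = np.corrcoef(pred, behavior[test])[0, 1]
print(f"held-out prediction r = {r:.2f}")
```

The point of the exercise is the form of the model: once the weights are learned, each task's predicted performance is nothing more than a single dot product with the population rate vector.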
Collapse
|
26
|
Koenig-Robert R, VanRullen R, Tsuchiya N. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas. PLoS One 2015; 10:e0144858. [PMID: 26691722 PMCID: PMC4686956 DOI: 10.1371/journal.pone.0144858] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Accepted: 11/23/2015] [Indexed: 11/19/2022] Open
Abstract
Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alterations in low-level image properties when contrasting distinct object categories. When such a contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while eliciting sustained and constant responses in early visual areas. SWIFT and the localizer were similarly selective and specific in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and showed that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
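The frequency-tagging logic behind SWIFT can be sketched with simulated time courses (synthetic signals, an arbitrary tagging frequency, and a simple spectral SNR measure; none of these are the study's stimuli or fMRI analysis): a category-selective response that follows the periodic semantic modulation shows a spectral peak at the tag frequency, while a steady early-visual response does not.

```python
import numpy as np

rng = np.random.default_rng(3)

fs, dur, f_tag = 10.0, 60.0, 0.5   # sampling rate (Hz), duration (s), tag freq (Hz)
t = np.arange(0, dur, 1 / fs)

# Simulated responses: a category-selective area follows the periodic
# semantic modulation; an early visual area responds steadily (noise only).
category_area = np.sin(2 * np.pi * f_tag * t) + 0.5 * rng.standard_normal(t.size)
early_area = 0.5 * rng.standard_normal(t.size)

def tagged_power(sig):
    """Power at the tagging frequency relative to neighboring bins (SNR)."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_tag))
    neighbors = np.r_[spec[k - 3:k - 1], spec[k + 2:k + 4]]
    return spec[k] / neighbors.mean()

print("category area SNR:", tagged_power(category_area))
print("early area SNR:", tagged_power(early_area))
```

Only the signal that is modulated at the tag frequency shows a large SNR at that frequency, which is the dissociation the method exploits.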
Collapse
Affiliation(s)
- Roger Koenig-Robert
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, Australia
- * E-mail: (RK); (NT)
| | - Rufin VanRullen
- CNRS, UMR5549, Centre de Recherche Cerveau et Cognition, Faculté de Médecine de Purpan, 31052 Toulouse, France
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, 31052 Toulouse, France
| | - Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, Australia
- Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo, Japan, 102–8266
- * E-mail: (RK); (NT)
| |
Collapse
|
27
|
Ramakrishnan K, Scholte HS, Groen IIA, Smeulders AWM, Ghebreab S. Visual dictionaries as intermediate features in the human brain. Front Comput Neurosci 2015; 8:168. [PMID: 25642183 PMCID: PMC4295527 DOI: 10.3389/fncom.2014.00168] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2014] [Accepted: 12/05/2014] [Indexed: 11/13/2022] Open
Abstract
The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in how they compute visual dictionaries and in their pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxelwise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2, and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity than HMAX. Furthermore, the visual dictionary representations of HMAX and BoW explain a significant amount of brain activity in higher areas believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to represent neural responses in low- and intermediate-level visual areas more faithfully.
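A minimal Bag-of-Words encoder conveys the idea of a visual dictionary (toy Gaussian "patches" and a plain k-means, not the HMAX or BoW implementations compared in the paper): patches are assigned to their nearest visual word, and an image is represented by its word histogram.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=20):
    """Plain k-means: returns a (k, d) dictionary of visual words."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_encode(patches, dictionary):
    """Histogram of nearest visual words: the BoW image representation."""
    d = ((patches[:, None] - dictionary[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(dictionary))
    return hist / hist.sum()

# Toy "patches": two images drawing from different feature clusters
patches_img1 = rng.normal(0.0, 0.3, (200, 8))
patches_img2 = rng.normal(1.0, 0.3, (200, 8))
dictionary = kmeans(np.vstack([patches_img1, patches_img2]), k=16)

h1 = bow_encode(patches_img1, dictionary)
h2 = bow_encode(patches_img2, dictionary)
print("histogram overlap:", np.minimum(h1, h2).sum())
```

Because the two images draw on different parts of feature space, their word histograms barely overlap, which is what makes the representation useful for recognition.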
Affiliation(s)
- Kandan Ramakrishnan
- Intelligent Systems Lab Amsterdam, Institute of Informatics, University of Amsterdam, Amsterdam, Netherlands
- H Steven Scholte
- Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Iris I A Groen
- Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Arnold W M Smeulders
- Intelligent Systems Lab Amsterdam, Institute of Informatics, University of Amsterdam, Amsterdam, Netherlands
- Sennay Ghebreab
- Intelligent Systems Lab Amsterdam, Institute of Informatics, University of Amsterdam, Amsterdam, Netherlands; Cognitive Neuroscience Group, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands

28
Csete G, Bognár A, Csibri P, Kaposvári P, Sáry G. Aging alters visual processing of objects and shapes in inferotemporal cortex in monkeys. Brain Res Bull 2014; 110:76-83. [PMID: 25526896 DOI: 10.1016/j.brainresbull.2014.11.005]
Abstract
Visual perception declines with age. Perceptual deficits may originate not only in the optical system serving vision but also in the neural machinery that processes visual information. Since homologies between monkey and human vision permit extrapolation from monkeys to humans, data from young, middle-aged, and old monkeys were analyzed to reveal age-related changes in neuronal activity in the inferotemporal cortex, which is critical for object and shape vision. We found increased neuronal response latency and decreased stimulus selectivity in the older animals, and we suggest that these changes may underlie the perceptual uncertainties frequently observed in the elderly.
Affiliation(s)
- G Csete
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, H-6720 Szeged, Hungary; Department of Neurology, Faculty of Medicine, Semmelweis u. 6, H-6725 Szeged, Hungary.
- A Bognár
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, H-6720 Szeged, Hungary.
- P Csibri
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, H-6720 Szeged, Hungary.
- P Kaposvári
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, H-6720 Szeged, Hungary.
- Gy Sáry
- Department of Physiology, Faculty of Medicine, University of Szeged, Dóm tér 10, H-6720 Szeged, Hungary.

29
Unno S, Handa T, Nagasaka Y, Inoue M, Mikami A. Modulation of neuronal activity with cue-invariant shape discrimination in the primate superior temporal sulcus. Neuroscience 2014; 268:221-35. [PMID: 24674847 DOI: 10.1016/j.neuroscience.2014.03.024]
Abstract
Shape perception can be achieved based on various cues such as luminance, color, texture, depth and motion. To investigate common neural mechanisms underlying shape perception cued by different visual attributes, we examined single-neuron activity in the monkey anterior superior temporal sulcus (STS) in response to shapes defined by luminance and motion cues during shape discrimination. We found cortical mapping with respect to selectivity for shape as well as for direction of motion in the STS. About 90% of shape-selective neurons were located in the lower bank of the STS (lSTS), assigned to the ventral pathway, while about 80% of direction-selective neurons were found in the upper bank of the STS (uSTS), assigned to the dorsal pathway. Neurons showing selectivity for both shape and motion coexisted in the lSTS as well as the uSTS, indicating that integration or convergence of shape and motion information can occur in both banks of the STS. About 90% of STS neurons selective both for shapes defined by a luminance cue and for shapes defined by a motion cue were located in the lSTS. These neurons showed highly similar shape preferences across the different visual attributes, indicating cue-invariant shape selectivity. The cue-invariant shape selectivity was modulated by target selection as well as by the monkeys' discrimination performance. These results suggest that the lSTS, but not the uSTS, could be involved in cue-invariant shape discrimination.
Affiliation(s)
- S Unno
- Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Kanrin, Inuyama, Aichi, Japan
- T Handa
- Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Kanrin, Inuyama, Aichi, Japan
- Y Nagasaka
- Department of Psychology, Rikkyo University, Toshimaku, Tokyo, Japan
- M Inoue
- Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Kanrin, Inuyama, Aichi, Japan
- A Mikami
- Department of Behavioral and Brain Sciences, Primate Research Institute, Kyoto University, Kanrin, Inuyama, Aichi, Japan.

30
Rubin RD, Chesney SA, Cohen NJ, Gonsalves BD. Using fMR-adaptation to track complex object representations in perirhinal cortex. Cogn Neurosci 2014; 4:107-14. [PMID: 23997832 DOI: 10.1080/17588928.2013.787056]
Abstract
The emphasis on medial temporal lobe regions has shifted from their role in long-term declarative memory to an appreciation of their role in cognitive domains beyond declarative memory, such as implicit memory, working memory, and perception. Recent theoretical accounts emphasize the function of perirhinal cortex in terms of its role in the ventral visual stream. Here, we used functional magnetic resonance adaptation (fMRa) to show that brain structures in the visual processing stream can bind item features prior to the involvement of hippocampal binding mechanisms. Evidence for perceptual binding was assessed by comparing BOLD (blood-oxygen-level-dependent) responses to fused objects with responses to variants of the same objects presented as different, non-fused forms (e.g., physically separate objects). Adaptation of the neural response to fused, but not non-fused, objects was observed in left fusiform cortex and left perirhinal cortex, indicating the involvement of these regions in the perceptual binding of item representations.
31
Raos V, Kilintari M, Savaki HE. Viewing a forelimb induces widespread cortical activations. Neuroimage 2014; 89:122-42. [DOI: 10.1016/j.neuroimage.2013.12.010]
32
Barense MD, Erez J, Ma H, Cusack R. Resources required for processing ambiguous complex features in vision and audition are modality specific. Cogn Affect Behav Neurosci 2014; 14:336-353. [PMID: 24022792 DOI: 10.3758/s13415-013-0207-1]
Abstract
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.
33
Abstract
The dissociation of a figure from its background is an essential feat of visual perception, as it allows us to detect, recognize, and interact with shapes and objects in our environment. In order to understand how the human brain gives rise to the perception of figures, we here review experiments that explore the links between activity in visual cortex and performance of perceptual tasks related to figure perception. We organize our review according to a proposed model that attempts to contextualize figure processing within the more general framework of object processing in the brain. Overall, the current literature provides us with individual linking hypotheses as to cortical regions that are necessary for particular tasks related to figure perception. Attempts to reach a more complete understanding of how the brain instantiates figure and object perception, however, will have to consider the temporal interaction between the many regions involved, the details of which may vary widely across different tasks.
34
Hassler U, Friese U, Martens U, Trujillo-Barreto N, Gruber T. Repetition priming effects dissociate between miniature eye movements and induced gamma-band responses in the human electroencephalogram. Eur J Neurosci 2013; 38:2425-33. [DOI: 10.1111/ejn.12244]
Affiliation(s)
- Uwe Hassler
- Institute of Psychology, Osnabrück University, Seminarstrasse 20, 49074 Osnabrück, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ulla Martens
- Institute of Psychology, Osnabrück University, Seminarstrasse 20, 49074 Osnabrück, Germany
- Thomas Gruber
- Institute of Psychology, Osnabrück University, Seminarstrasse 20, 49074 Osnabrück, Germany

35
Yong KXX, Warren JD, Warrington EK, Crutch SJ. Intact reading in patients with profound early visual dysfunction. Cortex 2013; 49:2294-306. [PMID: 23578749 PMCID: PMC3902200 DOI: 10.1016/j.cortex.2013.01.009]
Abstract
Despite substantial neuroscientific evidence for a region of visual cortex dedicated to the processing of written words, many studies continue to reject explanations of letter-by-letter (LBL) reading in terms of impaired word form representations or parallel letter processing in favour of more general deficits of visual function. In the current paper, we demonstrate that whilst LBL reading is often associated with general visual deficits, these deficits are not necessarily sufficient to cause reading impairment and have led to accounts of LBL reading which are based largely on evidence of association rather than causation. We describe two patients with posterior cortical atrophy (PCA) who exhibit remarkably preserved whole word and letter reading despite profound visual dysfunction. Relative to controls, both patients demonstrated impaired performance on tests of early visual, visuoperceptual and visuospatial processing; visual acuity was the only skill preserved in both individuals. By contrast, both patients were able to read aloud words with perfect to near-perfect accuracy. Reading performance was also rapid, with no overall significant difference in response latencies relative to age- and education-matched controls. Furthermore, the patients violated a key prediction of general visual accounts of LBL reading, namely that pre-lexical impairments should result in prominent word length effects; in the two reported patients, evidence for abnormal word length effects was equivocal or absent, and certainly an order of magnitude different from that reported for LBL readers. We argue that general visual accounts cannot explain the pattern of reading data reported, and attribute the preserved reading performance to preserved direct access to intact word form representations and/or parallel letter processing mechanisms. The current data emphasise the need for much clearer evidence of causality when attempting to draw connections between specific aspects of visual processing and different types of acquired peripheral dyslexia.
Affiliation(s)
- Keir X X Yong
- Dementia Research Centre, Department of Neurodegeneration, UCL Institute of Neurology, University College London, UK.

36
Salo E, Rinne T, Salonen O, Alho K. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks. Brain Res 2013; 1496:55-69. [DOI: 10.1016/j.brainres.2012.12.013]
37
Spelke ES, Lee SA. Core systems of geometry in animal minds. Philos Trans R Soc Lond B Biol Sci 2013; 367:2784-93. [PMID: 22927577 DOI: 10.1098/rstb.2012.0210]
Abstract
Research on humans from birth to maturity converges with research on diverse animals to reveal foundational cognitive systems in human and animal minds. The present article focuses on two such systems of geometry. One system represents places in the navigable environment by recording the distance and direction of the navigator from surrounding, extended surfaces. The other system represents objects by detecting the shapes of small-scale forms. These two systems show common signatures across animals, suggesting that they evolved in distant ancestral species. As children master symbolic systems such as maps and language, they come productively to combine representations from the two core systems of geometry in uniquely human ways; these combinations may give rise to abstract geometric intuitions. Studies of the ontogenetic and phylogenetic sources of abstract geometry therefore are illuminating of both human and animal cognition. Research on animals brings simpler model systems and richer empirical methods to bear on the analysis of abstract concepts in human minds. In return, research on humans, relating core cognitive capacities to symbolic abilities, sheds light on the content of representations in animal minds.
Affiliation(s)
- Elizabeth S Spelke
- Department of Psychology, Harvard University, 1130 William James Hall, 33 Kirkland Street, Cambridge, MA 02138, USA.

38
Abstract
In monkeys, a number of different neocortical as well as limbic structures have cell populations that respond preferentially to face stimuli. Face selectivity is also differentiated within itself: cells in the inferior temporal and prefrontal cortex tend to respond to facial identity, others in the upper bank of the superior temporal sulcus to gaze directions, and yet another population in the amygdala to facial expression. The great majority of these cells are sensitive to the entire configuration of a face. Changing the spatial arrangement of the facial features greatly diminishes the neurons' response. It would appear, then, that an entire neural network for faces exists which contains units highly selective to complex configurations and that respond to different aspects of the object "face." Given the vital importance of face recognition in primates, this may not come as a surprise. But are faces the only objects represented in this way? Behavioural work in humans suggests that nonface objects may be processed like faces if subjects are required to discriminate between visually similar exemplars and acquire sufficient expertise in doing so. Recent neuroimaging studies in humans indicate that level of categorisation and expertise interact to produce the specialisation for faces in the middle fusiform gyrus. Here we discuss some new evidence in the monkey suggesting that any arbitrary homogeneous class of artificial objects (which the animal has to individually learn, remember, and recognise again and again from among a large number of distractors sharing a number of common features with the target) can induce configurational selectivity in the response of neurons in the visual system. For all of the animals tested, the neurons from which we recorded were located in the anterior inferotemporal cortex. However, as we have only recorded from the posterior and anterior ventrolateral temporal lobe, other cells with a similar selectivity for the same objects may also exist in areas of the medial temporal lobe or in the limbic structures of the same "expert" monkeys. It seems that the encoding scheme used for faces may also be employed for other classes with similar properties. Thus, regarding their neural encoding, faces are not "special" but rather the "default special" class in the primate recognition system.
39
Nasr S, Tootell RBH. Role of fusiform and anterior temporal cortical areas in facial recognition. Neuroimage 2012; 63:1743-53. [PMID: 23034518 DOI: 10.1016/j.neuroimage.2012.08.031]
Abstract
Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast reversed faces. However, response accuracy was better correlated to recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex.
Affiliation(s)
- Shahin Nasr
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St., Charlestown, MA 02129, USA.

40
Taylor KI, Devereux BJ, Tyler LK. Conceptual structure: towards an integrated neuro-cognitive account. Lang Cogn Process 2011; 26:1368-1401. [PMID: 23750064 DOI: 10.1080/01690965.2011.568227]
Abstract
How are the meanings of concepts represented and processed? We present a cognitive model of conceptual representations and processing - the Conceptual Structure Account (CSA; Tyler & Moss, 2001) - as an example of a distributed, feature-based approach. In a first section, we describe the CSA and evaluate relevant neuropsychological and experimental behavioral data. We discuss studies using linguistic and non-linguistic stimuli, which are both presumed to access the same conceptual system. We then take the CSA as a framework for hypothesising how conceptual knowledge is represented and processed in the brain. This neuro-cognitive approach attempts to integrate the distributed feature-based characteristics of the CSA with a distributed and feature-based model of sensory object processing. Based on a review of relevant functional imaging and neuropsychological data, we argue that distributed accounts of feature-based representations have considerable explanatory power, and that a cognitive model of conceptual representations is needed to understand their neural bases.
Affiliation(s)
- K I Taylor
- Centre for Speech, Language and the Brain, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK; Memory Clinic - Neuropsychology Center, University Hospital Basel, Schanzenstrasse 55, 4031 Basel, Switzerland

41
Striem-Amit E, Dakwar O, Reich L, Amedi A. The large-scale organization of "visual" streams emerges without visual experience. Cereb Cortex 2011; 22:1698-709. [DOI: 10.1093/cercor/bhr253]
42
Brants M, Baeck A, Wagemans J, de Beeck HPO. Multiple scales of organization for object selectivity in ventral visual cortex. Neuroimage 2011; 56:1372-81. [PMID: 21376816 DOI: 10.1016/j.neuroimage.2011.02.079]
Abstract
Object knowledge is hierarchical. Several hypotheses have proposed that this property might be reflected in the spatial organization of ventral visual cortex. For example, all exemplars of a category might activate the same patches of cortex, but with a slightly different position of the peak of activation in each patch. According to this view, category selectivity would be organized at a larger spatial scale compared to exemplar selectivity. No empirical evidence for such proposals is available from experiments with human subjects. Here, we compare the relative scale of organization for category and exemplar selectivity in two datasets with two methods: (i) by investigating the previously reported beneficial effect of spatial smoothing of the fMRI data on the reliability of multi-voxel selectivity patterns; and (ii) by comparing the relative weight of lower and higher spatial frequencies in the spatial frequency spectrum of these selectivity patterns. The findings are consistent with the proposal that selectivity for stimulus properties that underlie finer distinctions between objects is organized at a finer scale than selectivity for stimulus properties that differentiate categories. This finding confirms the existence of multiple scales of organization in the ventral visual pathway.
Affiliation(s)
- Marijke Brants
- Laboratory of Biological Psychology, University of Leuven (K.U.Leuven), Leuven, Belgium

43
Biased competition in visual processing hierarchies: a learning approach using multiple cues. Cognit Comput 2011; 3:146-166. [PMID: 21475682 PMCID: PMC3059758 DOI: 10.1007/s12559-010-9092-x]
Abstract
In this contribution, we present a large-scale hierarchical system for object detection fusing bottom-up (signal-driven) processing results with top-down (model or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions which we show to be applicable to different object detection mechanisms. In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than 1 h of driving, we can show strong increases in performance and generalization when compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach especially when processing resources are constrained.
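The early coupling the abstract describes (top-down attentional signals biasing bottom-up hypothesis confidences before competitive selection, rather than rejecting hypotheses afterwards) can be reduced to a minimal sketch. The function name and numbers below are hypothetical, not taken from the paper:

```python
import numpy as np

def biased_selection(bottom_up, top_down_bias):
    """Competitive hypothesis selection with multiplicative top-down modulation.

    bottom_up: signal-driven detection confidences, one per object hypothesis.
    top_down_bias: model/task-driven modulation per hypothesis; values > 1
    facilitate a hypothesis, values < 1 suppress it.
    Returns the index of the winning hypothesis and the modulated scores.
    """
    modulated = np.asarray(bottom_up) * np.asarray(top_down_bias)
    return int(np.argmax(modulated)), modulated

# a moderate bias is enough to flip the competition toward hypothesis 1
winner, scores = biased_selection([0.6, 0.55, 0.2], [1.0, 1.3, 1.0])
```

The design point this illustrates is that the bias enters *before* the winner is chosen, so a weaker bottom-up hypothesis that matches task expectations can still win, which a late-rejection scheme cannot reproduce.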
44
Haynes JD. Beyond Libet: long-term prediction of free choices from neuroimaging signals. Research and Perspectives in Neurosciences 2011. [DOI: 10.1007/978-3-642-18015-6_10]
45
Affiliation(s)
- Chang Hong Liu
- Department of Psychology, McGill University, Quebec, Canada
- Avi Chaudhuri
- Department of Psychology, McGill University, Quebec, Canada

46
Simmons WK, Barsalou LW. The similarity-in-topography principle: reconciling theories of conceptual deficits. Cogn Neuropsychol 2010; 20:451-86. [PMID: 20957580 DOI: 10.1080/02643290342000032]
47
Wilcox T, Haslup JA, Boas DA. Dissociation of processing of featural and spatiotemporal information in the infant cortex. Neuroimage 2010; 53:1256-63. [PMID: 20603218 DOI: 10.1016/j.neuroimage.2010.06.064]
Abstract
A great deal is known about the development of visual object processing capacities and the neural structures that mediate these capacities in the mature observer. In contrast, little is known about the neural structures that mediate these capacities in the infant or how these structures eventually give rise to mature processing. The present research used near-infrared spectroscopy to investigate neural activation in visual, temporal, and parietal cortex during object processing tasks. Infants aged 5-7 months viewed visual events that required processing of the featural (Experiment 1) or the spatiotemporal (Experiment 2) properties of objects. In Experiment 1, different patterns of neural activation were obtained in temporal cortex in response to shape than to color information. In Experiment 2, different patterns of neural activation were obtained in parietal cortex in response to spatiotemporal (speed and path of motion) than to featural (shape and color) information. These results suggest a dissociation of processing of featural and spatiotemporal information in the infant cortex and provide evidence for early functional specification of the human brain. The outcome of these studies informs brain-behavior models of cognitive development and lays the foundation for systematic investigation of the functional maturation of object processing systems in the infant brain.
Affiliation(s)
- Teresa Wilcox
- Department of Psychology, Texas A&M University, College Station, TX 77843, USA.

48
Tompa T, Sáry G. A review on the inferior temporal cortex of the macaque. Brain Res Rev 2010; 62:165-82. [PMID: 19853626 DOI: 10.1016/j.brainresrev.2009.10.001]
49
Brain mechanisms supporting discrimination of sensory features of pain: a new model. J Neurosci 2010; 29:14924-31. [PMID: 19940188 DOI: 10.1523/jneurosci.5538-08.2009]
Abstract
Pain can be very intense or only mild, and can be well localized or diffuse. To date, little is known as to how such distinct sensory aspects of noxious stimuli are processed by the human brain. Using functional magnetic resonance imaging and a delayed match-to-sample task, we show that discrimination of pain intensity, a nonspatial aspect of pain, activates a ventrally directed pathway extending bilaterally from the insular cortex to the prefrontal cortex. This activation is distinct from the dorsally directed activation of the posterior parietal cortex and right dorsolateral prefrontal cortex that occurs during spatial discrimination of pain. Both intensity and spatial discrimination tasks activate highly similar aspects of the anterior cingulate cortex, suggesting that this structure contributes to common elements of the discrimination task such as the monitoring of sensory comparisons and response selection. Together, these results provide the foundation for a new model of pain in which bidirectional dorsal and ventral streams preferentially amplify and process distinct sensory features of noxious stimuli according to top-down task demands.
50
Vialatte FB, Maurice M, Dauwels J, Cichocki A. Steady-state visually evoked potentials: focus on essential paradigms and future perspectives. Prog Neurobiol 2009; 90:418-38. [PMID: 19963032 DOI: 10.1016/j.pneurobio.2009.11.005]
Abstract
After 40 years of investigation, steady-state visually evoked potentials (SSVEPs) have been shown to be useful for many paradigms in cognitive (visual attention, binocular rivalry, working memory, and brain rhythms) and clinical neuroscience (aging, neurodegenerative disorders, schizophrenia, ophthalmic pathologies, migraine, autism, depression, anxiety, stress, and epilepsy). Recently, in engineering, SSVEPs found a novel application for SSVEP-driven brain-computer interface (BCI) systems. Although some SSVEP properties are well documented, many questions are still hotly debated. We provide an overview of recent SSVEP studies in neuroscience (using implanted and scalp EEG, fMRI, or PET), with the perspective of modern theories about the visual pathway. We investigate the steady-state evoked activity, its properties, and the mechanisms behind SSVEP generation. Next, we describe the SSVEP-BCI paradigm and review recently developed SSVEP-based BCI systems. Lastly, we outline future research directions related to basic and applied aspects of SSVEPs.
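The frequency-tagging principle behind the SSVEP-BCI paradigm this abstract reviews can be illustrated with a toy decoder that picks whichever candidate stimulation frequency carries the most spectral power. This is a hedged sketch on synthetic data; deployed systems typically use canonical correlation analysis and harmonic features rather than a single FFT bin:

```python
import numpy as np

def classify_ssvep(signal, fs, candidate_freqs):
    """Pick the candidate stimulation frequency with the largest power.

    signal: 1-D recording (e.g., one EEG channel) sampled at fs Hz.
    candidate_freqs: the flicker frequencies the interface presents.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # power at the FFT bin closest to each tagged frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# synthetic 2-s response: a 15 Hz flicker component buried in noise
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 15 * t) + 0.5 * rng.normal(size=t.size)
```

Calling `classify_ssvep(x, fs, [12.0, 15.0, 20.0])` on the synthetic trace recovers the tagged frequency, which is the core selection step an SSVEP-driven BCI maps to a command.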
Affiliation(s)
- François-Benoît Vialatte
- Riken BSI, Laboratory for Advanced Brain Signal Processing, 2-1 Hirosawa, Wako-Shi, Saitama-Ken 351-0128, Japan.