1
Lin W, Lv C, Liao J, Hu Y, Liu Y, Lin J. Feature versus object in interpreting working memory capacity. NPJ Sci Learn 2024; 9:67. [PMID: 39548090 PMCID: PMC11568228 DOI: 10.1038/s41539-024-00279-x]
Abstract
The debate about whether the capacity of working memory (WM) varies with the complexity of memory items continues. This study employed novel experimental materials to investigate the role of complexity in WM capacity. Across seven experiments, we explored the relationship between complexity and WM capacity. The results indicated that the complexity of memory items significantly affects WM capacity. However, given the non-linear relationship between complexity and WM capacity, we propose that WM may not allocate resources directly to each individual item. Instead, it might integrate these items to some extent before storage.
Affiliation(s)
- Wuji Lin
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Chenxi Lv
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Jiejie Liao
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Yuan Hu
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Yutong Liu
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Jingyuan Lin
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
2
Leger KR, Cho I, Valoumas I, Schwartz D, Mair RW, Goh JOS, Gutchess A. Cross-cultural comparison of the neural correlates of true and false memory retrieval. Memory 2024; 32:1323-1340. [PMID: 38266009 PMCID: PMC11266529 DOI: 10.1080/09658211.2024.2307923]
Abstract
Prior work has shown Americans have higher levels of memory specificity than East Asians. Neuroimaging studies have not investigated mechanisms that account for cultural differences at retrieval. In this study, we use fMRI to assess whether mnemonic discrimination, distinguishing novel from previously encountered stimuli, accounts for cultural differences in memory. Fifty-five American and 55 Taiwanese young adults completed an object recognition paradigm testing discrimination of old targets, similar lures and novel foils. Mnemonic discrimination was tested by comparing discrimination of similar lures from studied targets, and results showed the relationship between activity in right fusiform gyrus and behavioural discrimination between target and lure objects differed across cultural groups. Parametric modulation analyses of activity during lure correct rejections also indicated that groups differed in left superior parietal cortex response to variations in lure similarity. Additional analyses of old vs. new activity indicated that Americans and Taiwanese differ in the neural activity supporting general object recognition in the hippocampus, left inferior frontal gyrus and middle frontal gyrus. Results are juxtaposed against comparisons of the regions activated in common across the two cultures. Overall, Americans and Taiwanese differ in the extent to which they recruit visual processing and attention modulating brain regions.
Affiliation(s)
- Isu Cho
- Department of Psychology, Brandeis University, Waltham, MA, USA
- Ross W. Mair
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Joshua Oon Soo Goh
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei City, Taiwan
- Department of Psychology, National Taiwan University, Taipei City, Taiwan
- Neurobiology and Cognitive Sciences Center, National Taiwan University, Taipei City, Taiwan
- Center of Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei City, Taiwan
- Angela Gutchess
- Department of Psychology, Brandeis University, Waltham, MA, USA
3
Ma Y, Zhang W, Du M, Jing H, Zheng N. Hierarchical Bayesian Causality Network to Extract High-Level Semantic Information in Visual Cortex. Int J Neural Syst 2024; 34:2450002. [PMID: 38084473 DOI: 10.1142/s0129065724500023]
Abstract
Functional MRI (fMRI) is a brain signal with high spatial resolution, and visual cognitive processes and semantic information in the brain can be represented and obtained through fMRI. In this paper, we design single-graphic and matched/unmatched double-graphic visual stimulus experiments and collect 12 subjects' fMRI data to explore the brain's visual perception processes. In the double-graphic stimulus experiment, we focus on the high-level semantic information as "matching", and remove tail-to-tail conjunction by designing a model to screen the matching-related voxels. Then, we perform Bayesian causal learning between fMRI voxels based on the transfer entropy, establish a hierarchical Bayesian causal network (HBcausalNet) of the visual cortex, and use the model for visual stimulus image reconstruction. HBcausalNet achieves an average accuracy of 70.57% and 53.70% in single- and double-graphic stimulus image reconstruction tasks, respectively, higher than HcorrNet and HcausalNet. The results show that the matching-related voxel screening and causality analysis method in this paper can extract the "matching" information in fMRI, obtain a direct causal relationship between matching information and fMRI, and explore the causal inference process in the brain. It suggests that our model can effectively extract high-level semantic information in brain signals and model effective connections and visual perception processes in the visual cortex of the brain.
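The causal-learning step named in this abstract rests on transfer entropy between voxel time series. As an illustration only (not the authors' implementation), a plug-in estimator of transfer entropy for two discretized fMRI time courses could look like the following; the bin count and the toy data are assumptions of this sketch.

```python
import numpy as np

def transfer_entropy(x, y, n_bins=4):
    """Estimate transfer entropy TE(x -> y) in bits for two 1-D time series.

    Both series are discretized into quantile bins and the plug-in estimator
    of I(y_t ; x_{t-1} | y_{t-1}) is computed from joint counts.
    """
    def discretize(v):
        edges = np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(v, edges)

    xd, yd = discretize(np.asarray(x)), discretize(np.asarray(y))
    y_t, y_past, x_past = yd[1:], yd[:-1], xd[:-1]

    # Joint probabilities of (y_t, y_{t-1}, x_{t-1}) from empirical counts
    triples = np.stack([y_t, y_past, x_past], axis=1)
    uniq, counts = np.unique(triples, axis=0, return_counts=True)
    p_joint = counts / counts.sum()

    te = 0.0
    for (yt, yp, xp), p in zip(uniq, p_joint):
        p_yp_xp = np.mean((y_past == yp) & (x_past == xp))
        p_yt_yp = np.mean((y_t == yt) & (y_past == yp))
        p_yp = np.mean(y_past == yp)
        # p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1})
        te += p * np.log2((p / p_yp_xp) / (p_yt_yp / p_yp))
    return te

# Toy check: y lags x by one sample, so TE(x -> y) should exceed TE(y -> x)
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(2000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```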
Affiliation(s)
- Yongqiang Ma
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
- Wen Zhang
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
- Ming Du
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
- Haodong Jing
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
- Nanning Zheng
- National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
4
Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Mapping the dynamics of visual feature coding: Insights into perception and integration. PLoS Comput Biol 2024; 20:e1011760. [PMID: 38190390 PMCID: PMC10798643 DOI: 10.1371/journal.pcbi.1011760]
Abstract
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
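The time-resolved mapping of feature coding described here is commonly implemented by training a classifier at every EEG time point to predict one stimulus feature from the channel voltages. The sketch below illustrates that generic approach, not the authors' pipeline; the array shapes and the choice of scikit-learn's LDA classifier are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def timecourse_decoding(epochs, labels, cv=5):
    """Cross-validated decoding accuracy of a stimulus feature at each time point.

    epochs : array, shape (n_trials, n_channels, n_times) -- EEG epochs
    labels : array, shape (n_trials,) -- e.g., orientation bin of each grating
    """
    n_times = epochs.shape[-1]
    accuracy = np.empty(n_times)
    for t in range(n_times):
        clf = LinearDiscriminantAnalysis()
        accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return accuracy  # compare against chance (1 / number of feature levels)
```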
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Amanda K. Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M. Shatek
- School of Psychology, The University of Sydney, Sydney, Australia
5
Sanders DMW, Cowell RA. The locus of recognition memory signals in human cortex depends on the complexity of the memory representations. Cereb Cortex 2023; 33:9835-9849. [PMID: 37401000 DOI: 10.1093/cercor/bhad248]
Abstract
According to a "Swiss Army Knife" model of the brain, cognitive functions such as episodic memory and face perception map onto distinct neural substrates. In contrast, representational accounts propose that each brain region is best explained not by which specialized function it performs, but by the type of information it represents with its neural firing. In a functional magnetic resonance imaging study, we asked whether the neural signals supporting recognition memory fall mandatorily within the medial temporal lobes (MTL), traditionally thought the seat of declarative memory, or whether these signals shift within cortex according to the content of the memory. Participants studied objects and scenes that were unique conjunctions of pre-defined visual features. Next, we tested recognition memory in a task that required mnemonic discrimination of both simple features and complex conjunctions. Feature memory signals were strongest in posterior visual regions, declining with anterior progression toward the MTL, while conjunction memory signals followed the opposite pattern. Moreover, feature memory signals correlated with feature memory discrimination performance most strongly in posterior visual regions, whereas conjunction memory signals correlated with conjunction memory discrimination most strongly in anterior sites. Thus, recognition memory signals shifted with changes in memory content, in line with representational accounts.
Affiliation(s)
- D Merika W Sanders
- Department of Psychology, Harvard University, Cambridge, MA 02138, United States
- Rosemary A Cowell
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO 80309, United States
- Department of Psychology & Neuroscience, University of Colorado Boulder, Boulder, CO 80309, United States
6
The role of ventral stream areas for viewpoint-invariant object recognition. Neuroimage 2022; 251:119021. [DOI: 10.1016/j.neuroimage.2022.119021]
7
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. [PMID: 34210783 PMCID: PMC8425978 DOI: 10.1523/jneurosci.0693-20.2021]
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly. In order to illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features, as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC. SIGNIFICANCE STATEMENT: The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors. We recorded from single neurons in monkey primary auditory cortex (A1), while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
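The surrogate-population control described here (intact single-neuron tuning but shuffled higher-order correlations) can be illustrated by permuting trials independently for each neuron within a stimulus condition before re-running a population decoder. This is a generic sketch of that control, not the authors' targeted dimensionality reduction code; the logistic-regression decoder and the data shapes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def shuffle_noise_correlations(responses, conditions, rng):
    """Permute trials independently for each neuron within each condition.

    Preserves every neuron's condition-wise response distribution (tuning)
    but destroys trial-by-trial correlations among neurons.
    responses  : array, shape (n_trials, n_neurons)
    conditions : array, shape (n_trials,) -- stimulus/condition label per trial
    """
    shuffled = responses.copy()
    for c in np.unique(conditions):
        idx = np.flatnonzero(conditions == c)
        for n in range(responses.shape[1]):
            shuffled[idx, n] = responses[rng.permutation(idx), n]
    return shuffled

def population_decoding(responses, labels):
    """Cross-validated accuracy of a linear population decoder."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, responses, labels, cv=5).mean()

# Usage (hypothetical variable names): compare intact vs shuffled populations
# rng = np.random.default_rng(0)
# acc_intact = population_decoding(spikes, attended_feature)
# acc_shuffled = population_decoding(
#     shuffle_noise_correlations(spikes, stimulus_id, rng), attended_feature)
```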
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Social Sciences, Oxford Brookes University, Oxford, OX4 0BP, United Kingdom
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
8
Stimulus variability and task relevance modulate binding-learning. Atten Percept Psychophys 2021; 84:1151-1166. [PMID: 34282562 DOI: 10.3758/s13414-021-02338-6]
Abstract
Classical theories of attention posit that integration of features into object representation (or feature binding) requires engagement of focused attention. Studies challenging this idea have demonstrated that feature binding can happen outside of the focus of attention for familiar objects, as well as for arbitrary color-orientation conjunctions. Detection performance for arbitrary feature conjunction improves with training, suggesting a potential role of perceptual learning mechanisms in the integration of features, a process called "binding-learning". In the present study, we investigate whether stimulus variability and task relevance, two critical determinants of visual perceptual learning, also modulate binding-learning. Transfer of learning in a visual search task to a pre-exposed color-orientation conjunction was assessed under conditions of varying stimulus variability and task relevance. We found transfer of learning for the pre-exposed feature conjunctions that were trained with high variability (Experiment 1). Transfer of learning was not observed when the conjunction was rendered task-irrelevant during training due to pop-out targets (Experiment 2). Our findings show that feature binding is determined by principles of perceptual learning, and they support the idea that functions traditionally attributed to goal-driven attention can be grounded in the learning of the statistical structure of the environment.
9
Sone H, Kang MS, Li AY, Tsubomi H, Fukuda K. Simultaneous estimation procedure reveals the object-based, but not space-based, dependence of visual working memory representations. Cognition 2021; 209:104579. [PMID: 33406461 DOI: 10.1016/j.cognition.2020.104579]
Abstract
Visual working memory (VWM) allows us to actively represent a limited amount of visual information in mind. Although its severe capacity limit is widely accepted, researchers disagree on the nature of its representational unit. Object-based theories argue that VWM organizes feature representations into integrated representations, whereas feature-based theories argue that VWM represents visual features independently. Supporting a feature-based account of VWM, recent studies have demonstrated that features comprising an object can be forgotten independently. Although evidence of feature-based forgetting invalidates a pure object-based account of VWM that assumes perfect integration of feature representations, it is possible that feature representations may be organized in a dependent manner on the basis of objects when they exist in memory. Furthermore, many previous studies prompted participants to recall object features independently by sequentially displaying a response probe for each feature (i.e., sequential estimation procedure), and this task demand might have promoted the independence of feature representations in VWM. To test these possibilities, we created a novel task to simultaneously capture the representational quality of two features of the same object (i.e., simultaneous estimation procedure) and tested their dependence across the entire spectrum of representational quality. Here, we found that the quality of feature representations within the same object covaried reliably in both sequential and simultaneous estimation procedures, but this representational dependence was statistically stronger in the simultaneous estimation procedure than in the sequential estimation procedure. Furthermore, we confirmed that neither the shared spatial location nor simultaneous estimation of two features was sufficient to induce representational dependence in VWM. Thus, our results demonstrate that feature representations in VWM are organized in a dependent manner on the basis of objects, but the degree of dependence can vary based on the current task demand.
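One simple way to quantify the representational dependence this abstract reports is to correlate, across trials, the absolute report errors of the two features of the same object. The sketch below illustrates that logic under the assumption of circular feature spaces and a rank correlation; it is not the authors' analysis.

```python
import numpy as np
from scipy.stats import spearmanr

def circular_error(reported_deg, target_deg, period=360):
    """Signed report error on a circular feature dimension (e.g., colour wheel)."""
    d = np.asarray(reported_deg) - np.asarray(target_deg)
    return (d + period / 2) % period - period / 2

def feature_dependence(errors_a, errors_b):
    """Rank correlation of absolute errors for two features of the same object.

    A positive value means trials with a poor report of one feature also tend
    to have a poor report of the other (object-based dependence).
    """
    rho, p = spearmanr(np.abs(errors_a), np.abs(errors_b))
    return rho, p
```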
Affiliation(s)
- Hirotaka Sone
- University of Toronto Mississauga, Canada; University of Toyama, Japan
- Min-Suk Kang
- Sungkyunkwan University, Republic of Korea; Center for Neuroscience Imaging Research, Republic of Korea
- Keisuke Fukuda
- University of Toronto Mississauga, Canada; University of Toronto, Canada
10
Liang JC, Erez J, Zhang F, Cusack R, Barense MD. Experience Transforms Conjunctive Object Representations: Neural Evidence for Unitization After Visual Expertise. Cereb Cortex 2020; 30:2721-2739. [DOI: 10.1093/cercor/bhz250]
Abstract
Certain transformations must occur within the brain to allow rapid processing of familiar experiences. Complex objects are thought to become unitized, whereby multifeature conjunctions are retrieved as rapidly as a single feature. Behavioral studies strongly support unitization theory, but a compelling neural mechanism is lacking. Here, we examined how unitization transforms conjunctive representations to become more “feature-like” by recruiting posterior regions of the ventral visual stream (VVS) whose architecture is specialized for processing single features. We used functional magnetic resonance imaging to scan humans before and after visual training with novel objects. We implemented a novel multivoxel pattern analysis to measure a conjunctive code, which represented a conjunction of object features above and beyond the sum of the parts. Importantly, a multivoxel searchlight showed that the strength of conjunctive coding in posterior VVS increased posttraining. Furthermore, multidimensional scaling revealed representational separation at the level of individual features in parallel to the changes at the level of feature conjunctions. Finally, functional connectivity between anterior and posterior VVS was higher for novel objects than for trained objects, consistent with early involvement of anterior VVS in unitizing feature conjunctions in response to novelty. These data demonstrate that the brain implements unitization as a mechanism to refine complex object representations over the course of multiple learning experiences.
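The "conjunctive code ... above and beyond the sum of the parts" can be illustrated by removing the linearly additive contribution of each single feature from the multivoxel patterns and asking whether object identity remains decodable from the residuals. This is a schematic stand-in for the authors' multivoxel pattern analysis, with assumed data shapes and a scikit-learn classifier.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

def one_hot(labels):
    """Indicator (one-hot) coding of a categorical label vector."""
    levels, idx = np.unique(labels, return_inverse=True)
    return np.eye(len(levels))[idx]

def conjunction_information(patterns, feature_labels, object_labels):
    """Test for object (conjunction) information beyond single-feature effects.

    patterns       : (n_trials, n_voxels) multivoxel activity patterns
    feature_labels : (n_trials, n_features) level of each single feature
    object_labels  : (n_trials,) identity of the feature conjunction
    """
    # Main-effect design matrix: one-hot code of every single feature
    design = np.hstack([one_hot(feature_labels[:, j])
                        for j in range(feature_labels.shape[1])])
    # Remove the additive feature contribution from every voxel's response
    residuals = patterns - LinearRegression().fit(design, patterns).predict(design)
    # Decodable object identity in the residuals indicates a conjunctive code
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, residuals, object_labels, cv=5).mean()
```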
Affiliation(s)
- Jackson C Liang
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Jonathan Erez
- Department of Psychology, Brain and Mind Institute, Western Interdisciplinary Research Building, Western University, London, ON N6A 5B7, Canada
- Felicia Zhang
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- Rhodri Cusack
- School of Psychology, Trinity College Dublin, Dublin, Ireland
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Rotman Research Institute, Toronto, ON M6A 2E1, Canada
11
Kok P, Rait LI, Turk-Browne NB. Content-based Dissociation of Hippocampal Involvement in Prediction. J Cogn Neurosci 2019; 32:527-545. [PMID: 31820676 DOI: 10.1162/jocn_a_01509]
Abstract
Recent work suggests that a key function of the hippocampus is to predict the future. This is thought to depend on its ability to bind inputs over time and space and to retrieve upcoming or missing inputs based on partial cues. In line with this, previous research has revealed prediction-related signals in the hippocampus for complex visual objects, such as fractals and abstract shapes. Implicit in such accounts is that these computations in the hippocampus reflect domain-general processes that apply across different types and modalities of stimuli. An alternative is that the hippocampus plays a more domain-specific role in predictive processing, with the type of stimuli being predicted determining its involvement. To investigate this, we compared hippocampal responses to auditory cues predicting abstract shapes (Experiment 1) versus oriented gratings (Experiment 2). We measured brain activity in male and female human participants using high-resolution fMRI, in combination with inverted encoding models to reconstruct shape and orientation information. Our results revealed that expectations about shape and orientation evoked distinct representations in the hippocampus. For complex shapes, the hippocampus represented which shape was expected, potentially serving as a source of top-down predictions. In contrast, for simple gratings, the hippocampus represented only unexpected orientations, more reminiscent of a prediction error. We discuss several potential explanations for this content-based dissociation in hippocampal function, concluding that the computational role of the hippocampus in predictive processing may depend on the nature and complexity of stimuli.
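The inverted encoding models used here to reconstruct shape and orientation information typically follow a two-step estimate-then-invert procedure over a basis of hypothetical tuning channels. The sketch below shows one common formulation (raised-cosine orientation channels, least-squares weight estimation); the basis shape and channel count are assumptions, not details taken from the study.

```python
import numpy as np

def make_basis(orientations_deg, n_channels=8):
    """Raised-cosine orientation channels tiling the 180-deg orientation space
    (one common choice of basis; an assumption of this sketch)."""
    centers = np.linspace(0, 180, n_channels, endpoint=False)
    delta = np.deg2rad(np.asarray(orientations_deg)[:, None] - centers[None, :])
    # |cos| makes each channel 180-deg periodic; the power sharpens its tuning
    return np.abs(np.cos(delta)) ** (n_channels - 1)          # trials x channels

def iem_reconstruct(B_train, ori_train, B_test, n_channels=8):
    """Estimate channel-to-voxel weights on training data, then invert them to
    recover channel responses for held-out test data.

    B_train, B_test : (n_trials, n_voxels) response patterns
    ori_train       : (n_trials,) training orientations in degrees
    """
    C_train = make_basis(ori_train, n_channels)                # trials x channels
    # Forward model B = C @ W; least-squares estimate of W (channels x voxels)
    W, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)
    # Inversion: solve W.T @ C_test.T = B_test.T for the test channel responses
    C_test, *_ = np.linalg.lstsq(W.T, B_test.T, rcond=None)
    return C_test.T                                            # trials x channels
```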
Affiliation(s)
- Peter Kok
- Yale University; University College London
12
Object shape and surface properties are jointly encoded in mid-level ventral visual cortex. Curr Opin Neurobiol 2019; 58:199-208. [PMID: 31586749 DOI: 10.1016/j.conb.2019.09.009]
Abstract
Recognizing a myriad of visual objects rapidly is a hallmark of the primate visual system. Traditional theories of object recognition have focused on how crucial form features, for example, the orientation of edges, may be extracted in early visual cortex and utilized to recognize objects. An alternative view argues that much of early and mid-level visual processing focuses on encoding surface characteristics, for example, texture. Neurophysiological evidence from primate area V4 supports a third alternative: the joint, but independent, encoding of form and texture, which would be advantageous for segmenting objects from the background in natural scenes and for object recognition that is independent of surface texture. Future studies that leverage deep convolutional network models, especially focusing on network failures to match biology and behavior, can advance our insights into how such a joint representation of form and surface properties might emerge in visual cortex.
13
Cowell RA, Barense MD, Sadil PS. A Roadmap for Understanding Memory: Decomposing Cognitive Processes into Operations and Representations. eNeuro 2019; 6:ENEURO.0122-19.2019. [PMID: 31189554 PMCID: PMC6620388 DOI: 10.1523/eneuro.0122-19.2019]
Abstract
Thanks to patients Phineas Gage and Henry Molaison, we have long known that behavioral control depends on the frontal lobes, whereas declarative memory depends on the medial temporal lobes (MTL). For decades, cognitive functions such as behavioral control and declarative memory have served as labels for characterizing the division of labor in cortex. This approach has made enormous contributions to understanding how the brain enables the mind, providing a systems-level explanation of brain function that constrains lower-level investigations of neural mechanism. Today, the approach has evolved such that functional labels are often applied to brain networks rather than focal brain regions. Furthermore, the labels have diversified to include both broadly-defined cognitive functions (declarative memory, visual perception) and more circumscribed mental processes (recollection, familiarity, priming). We ask whether a process, a high-level mental phenomenon corresponding to an introspectively identifiable cognitive event, is the most productive label for dissecting memory. For example, recollection conflates a neurocomputational operation (pattern completion-based retrieval) with a class of representational content (associative, high-dimensional memories). Because a full theory of memory must identify operations and representations separately, and specify how they interact, we argue that processes like recollection constitute inadequate labels for characterizing neural mechanisms. Instead, we advocate considering the component operations and representations of processes like recollection in isolation. For the organization of memory, the evidence suggests that pattern completion is recapitulated widely across the ventral visual stream and MTL, but the division of labor between sites within this pathway can be explained by representational content.
Affiliation(s)
- Rosemary A Cowell
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada
- Patrick S Sadil
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, Massachusetts 01003
14
Sadil P, Potter KW, Huber DE, Cowell RA. Connecting the dots without top-down knowledge: Evidence for rapidly-learned low-level associations that are independent of object identity. J Exp Psychol Gen 2019; 148:1058-1070. [PMID: 31070394 PMCID: PMC6759832 DOI: 10.1037/xge0000607]
Abstract
Knowing the identity of an object can powerfully alter perception. Visual demonstrations of this, such as Gregory's (1970) hidden Dalmatian, affirm the existence of both top-down and bottom-up processing. We consider a third processing pathway: lateral connections between the parts of an object. Lateral associations are assumed by theories of object processing and hierarchical theories of memory, but little evidence attests to them. If they exist, their effects should be observable even in the absence of object identity knowledge. We employed Continuous Flash Suppression (CFS) while participants studied object images, such that visual details were learned without explicit object identification. At test, lateral associations were probed using a part-to-part matching task. We also tested whether part-whole links were facilitated by prior study using a part-naming task, and included another study condition (Word), in which participants saw only an object's written name. The key question was whether CFS study (which provided visual information without identity) would better support part-to-part matching (via lateral associations) whereas Word study (which provided identity without the correct visual form) would better support part-naming (via top-down processing). The predicted dissociation was found and confirmed by state-trace analyses. Thus, lateral part-to-part associations were learned and retrieved independently of object identity representations. This establishes novel links between perception and memory, demonstrating that (a) lateral associations at lower levels of the object identification hierarchy exist and contribute to object processing and (b) these associations are learned via rapid, episodic-like mechanisms previously observed for the high-level, arbitrary relations comprising episodic memories.
Affiliation(s)
- Patrick Sadil
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, MA 01003, USA
- Kevin W. Potter
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, MA 01003, USA
- David E. Huber
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, MA 01003, USA
- Rosemary A. Cowell
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, MA 01003, USA
15
Reeder RR, Hanke M, Pollmann S. Task relevance modulates the cortical representation of feature conjunctions in the target template. Sci Rep 2017; 7:4514. [PMID: 28674392 PMCID: PMC5495750 DOI: 10.1038/s41598-017-04123-8]
Abstract
Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
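Representational similarity analysis of this kind compares a region's neural representational dissimilarity matrix (RDM) against model RDMs coding the task-relevant and task-irrelevant dimensions. A minimal sketch of that comparison, with assumed inputs, is shown below; it is not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_model_fit(patterns, relevant_rdm, irrelevant_rdm):
    """Compare a region's representational geometry against two model RDMs.

    patterns : (n_conditions, n_voxels) mean response pattern per condition
    *_rdm    : (n_conditions, n_conditions) model dissimilarity matrices
    """
    # Neural RDM: correlation distance between all pairs of condition patterns
    neural_rdm = pdist(patterns, metric='correlation')
    upper = np.triu_indices(relevant_rdm.shape[0], k=1)
    rho_relevant, _ = spearmanr(neural_rdm, relevant_rdm[upper])
    rho_irrelevant, _ = spearmanr(neural_rdm, irrelevant_rdm[upper])
    return rho_relevant, rho_irrelevant  # contrast across ROIs or searchlights
```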
Affiliation(s)
- Reshanne R Reeder
- Department of Experimental Psychology, Institute of Psychology II, Otto-von-Guericke University, Magdeburg, Germany
- Michael Hanke
- Psychoinformatics Lab, Institute of Psychology II, Otto-von-Guericke University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Stefan Pollmann
- Department of Experimental Psychology, Institute of Psychology II, Otto-von-Guericke University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Magdeburg, Germany