101. Katz JS, Forloines MR, Strassberg LR, Bondy B. Observational drawing in the brain: A longitudinal exploratory fMRI study. Neuropsychologia 2021; 160:107960. PMID: 34274380. DOI: 10.1016/j.neuropsychologia.2021.107960.
Abstract
Observational drawing involves acquiring a number of basic drawing techniques and concepts, yet little is known about how observational drawing skills are represented in brain responses. Here, we investigate the behavioral and functional changes that accompany learning to draw in a longitudinal study of 45 participants, testing art students (n = 26) at the beginning and end of a 16-week observational drawing course against a matched group of non-art students (n = 19). Four novel tasks involving decisions about light sources, tonal value, line variation, and linear perspective were administered during task-based 7 T functional magnetic resonance imaging (fMRI). While exploratory in nature, we expected to find improvement on each task over time and functional changes in the prefrontal cortex and cerebellum for the art students. Art students' performance improved significantly on the light sources, line variation, and linear perspective tasks, and functional changes were found for the line variation, linear perspective, and tonal value tasks. Whole-brain analyses revealed diffuse functional changes, including in prefrontal areas and the cerebellum. Brain areas involved in cognitive processing, including attention, decision making, motor control, top-down control, visual information processing, and working memory, all changed functionally with experience. These findings demonstrate some of the first functional changes in the brain due to training in the arts and have implications for pedagogy and mental health.
Affiliation(s)
- Jeffrey S Katz: Department of Psychological Sciences, Auburn University, Auburn, AL, USA; AU MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA; Alabama Advanced Imaging Consortium, Birmingham, AL, USA; Center for Neuroscience, Auburn University, Auburn, AL, USA
- Martha R Forloines: Alzheimer's Disease Research Center, Department of Neurology, University of California, Davis, Sacramento, CA, USA
- Lily R Strassberg: Department of Psychological Sciences, Auburn University, Auburn, AL, USA
- Barbara Bondy: Department of Art and Art History, Auburn University, Auburn, AL, USA
102. Persichetti AS, Denning JM, Gotts SJ, Martin A. A Data-Driven Functional Mapping of the Anterior Temporal Lobes. J Neurosci 2021; 41:6038-6049. PMID: 34083253. PMCID: PMC8276737. DOI: 10.1523/jneurosci.0456-21.2021.
Abstract
Although the anterior temporal lobe (ATL) comprises several anatomic and functional subdivisions, it is often reduced to a homogeneous theoretical entity, such as a domain-general convergence zone, or "hub," for semantic information. Methodological limitations are largely to blame for the imprecise mapping of function to structure in the ATL. There are two major obstacles to using fMRI to identify the precise functional organization of the ATL: the difficult choice of stimuli and tasks to activate, and dissociate, specific regions within the ATL; and poor signal quality because of magnetic field distortions near the sinuses. To circumvent these difficulties, we developed a data-driven parcellation routine using resting-state fMRI data (24 females, 64 males) acquired using a sequence that was optimized to enhance signal in the ATL. Focusing on patterns of functional connectivity between each ATL voxel and the rest of the brain, we found that the ATL comprises at least 34 distinct functional parcels that are arranged into bands along the lateral and ventral cortical surfaces, extending from the posterior temporal lobes into the temporal poles. In addition, the anterior region of the fusiform gyrus, most often cited as the location of the semantic hub, was found to be part of a domain-specific network associated with face and social processing, rather than a domain-general semantic hub. These findings offer a fine-grained functional map of the ATL and offer an initial step toward using more precise language to describe the locations of functional responses in this heterogeneous region of human cortex.

Significance Statement: The functional role of the anterior aspects of the temporal lobes (ATL) is a contentious issue. While it is likely that different regions within the ATL subserve unique cognitive functions, most studies revert to vaguely referring to particular functional regions as "the ATL," and, thus, the mapping of function to anatomy remains unclear. We used resting-state fMRI connectivity patterns between the ATL and the rest of the brain to reveal that the ATL comprises at least 34 distinct functional parcels that are organized into a three-level functional hierarchy. These results provide a detailed functional map of the anterior temporal lobes that can guide future research on how distinct regions within the ATL support diverse cognitive functions.
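The parcellation logic described above, clustering voxels by the similarity of their whole-brain connectivity fingerprints, can be sketched in a few lines. This is a hypothetical simplification with synthetic data (the sizes, cluster count, and planted structure are invented), not the authors' actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "connectivity fingerprints": 200 ATL voxels x 1000
# whole-brain targets, with two planted groups standing in for
# distinct functional parcels (all numbers hypothetical).
fingerprints = np.vstack([
    rng.normal(0.3, 0.1, size=(100, 1000)),
    rng.normal(-0.2, 0.1, size=(100, 1000)),
])

# Cluster voxels by fingerprint similarity; each cluster is a
# candidate functional parcel.
labels = KMeans(n_clusters=2, n_init=10,
                random_state=0).fit_predict(fingerprints)

# Voxels with similar whole-brain connectivity end up in the
# same parcel.
print(len(set(labels[:100])), len(set(labels[100:])))  # 1 1
```

A real analysis would use observed voxel-to-brain correlation maps and a data-driven choice of the number of clusters; the fixed k = 2 here is purely illustrative.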
Affiliation(s)
- Andrew S Persichetti: Section on Cognitive Neuropsychology, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892
- Joseph M Denning: Section on Cognitive Neuropsychology, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892
- Stephen J Gotts: Section on Cognitive Neuropsychology, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892
- Alex Martin: Section on Cognitive Neuropsychology, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892
103. Yang J, Huber L, Yu Y, Bandettini PA. Linking cortical circuit models to human cognition with laminar fMRI. Neurosci Biobehav Rev 2021; 128:467-478. PMID: 34245758. DOI: 10.1016/j.neubiorev.2021.07.005.
Abstract
Laboratory animal research has provided significant insight into the function of cortical circuits at the laminar level, but this knowledge has yet to be fully leveraged toward understanding human brain function at a similar spatiotemporal scale. The use of functional magnetic resonance imaging (fMRI) in conjunction with neural models provides new opportunities to do so. During the last five years, human studies have demonstrated the value of high-resolution fMRI for studying laminar-specific activity in the human brain. This approach, mostly performed at ultra-high field strengths (≥ 7 T), is known as laminar fMRI. Advances in laminar fMRI are beginning to open new possibilities for addressing questions in basic cognitive neuroscience. In this paper, we first review recent methodological advances in laminar fMRI and describe recent human laminar fMRI studies. We then discuss how laminar fMRI can help bridge the gap between cortical circuit models and human cognition.
Affiliation(s)
- Jiajia Yang: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Laurentius Huber: MR-Methods Group, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, the Netherlands
- Yinghua Yu: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Peter A Bandettini: Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA; Functional MRI Core Facility, National Institute of Mental Health, Bethesda, MD, USA
104. Bonner MF, Epstein RA. Object representations in the human brain reflect the co-occurrence statistics of vision and language. Nat Commun 2021; 12:4081. PMID: 34215754. PMCID: PMC8253839. DOI: 10.1038/s41467-021-24368-2.
Abstract
A central regularity of visual perception is the co-occurrence of objects in the natural environment. Here we use machine learning and fMRI to test the hypothesis that object co-occurrence statistics are encoded in the human visual system and elicited by the perception of individual objects. We identified low-dimensional representations that capture the latent statistical structure of object co-occurrence in real-world scenes, and we mapped these statistical representations onto voxel-wise fMRI responses during object viewing. We found that cortical responses to single objects were predicted by the statistical ensembles in which they typically occur, and that this link between objects and their visual contexts was made most strongly in parahippocampal cortex, overlapping with the anterior portion of scene-selective parahippocampal place area. In contrast, a language-based statistical model of the co-occurrence of object names in written text predicted responses in neighboring regions of object-selective visual cortex. Together, these findings show that the sensory coding of objects in the human brain reflects the latent statistics of object context in visual and linguistic experience.
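The general encoding-model recipe summarized above (derive a low-dimensional embedding from object co-occurrence counts, then regress voxel responses onto it) can be sketched with synthetic data. Every quantity below is simulated and hypothetical; this is an illustration of the recipe, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated object-by-scene counts: which of 8 objects appear
# in each of 500 scenes (hypothetical data).
counts = rng.poisson(1.0, size=(8, 500))

# Object-by-object co-occurrence matrix, and a low-dimensional
# embedding of its latent structure via truncated SVD.
cooc = counts @ counts.T
U, S, _ = np.linalg.svd(cooc)
embedding = U[:, :3] * S[:3]  # 8 objects x 3 latent dimensions

# Encoding model: ridge regression from embeddings to a
# simulated voxel response, one value per object.
X = embedding
y = X @ np.array([1.0, -0.5, 0.2]) + rng.normal(0.0, 0.01, size=8)
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
pred = X @ w
```

If the embedding captures the response structure, predicted and observed responses correlate highly; a real analysis would cross-validate, fit every voxel, and compare visual against text-derived embeddings.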
Affiliation(s)
- Michael F Bonner: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Russell A Epstein: Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
105. Li J, Zhang R, Liu S, Liang Q, Zheng S, He X, Huang R. Human spatial navigation: Neural representations of spatial scales and reference frames obtained from an ALE meta-analysis. Neuroimage 2021; 238:118264. PMID: 34129948. DOI: 10.1016/j.neuroimage.2021.118264.
Abstract
Humans use different spatial reference frames (allocentric or egocentric) to navigate successfully toward a destination in spaces of different scales (environmental or vista). However, it remains unclear how the brain represents different spatial scales and reference frames. We therefore conducted an activation likelihood estimation (ALE) meta-analysis of 47 fMRI articles on human spatial navigation. Both environmental and vista spaces activated the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area in the right hemisphere. The environmental space showed stronger activation than the vista space in occipital and frontal regions; no brain region exhibited stronger activation for the vista than the environmental space. The allocentric and egocentric reference frames both activated the bilateral PPA and right RSC. The allocentric frame showed stronger activation than the egocentric frame in the right culmen, left middle frontal gyrus, and precuneus; no brain region displayed stronger activation for egocentric than for allocentric navigation. Our findings suggest that navigation across spatial scales evokes both specific and common brain regions, and that the regions representing the two spatial reference frames are not entirely separate.
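At its core, an ALE meta-analysis smooths each reported activation focus into a Gaussian "modeled activation" (MA) map and combines maps across experiments as a probabilistic union. The toy grid, FWHM, and foci below are hypothetical, and a real ALE additionally weights kernels by sample size and tests the map against a null distribution:

```python
import numpy as np

def gaussian_ma(grid, focus, fwhm=10.0):
    # Modeled activation: 3-D Gaussian centered on one focus.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    d2 = ((grid - focus) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy 40 mm cube at 1 mm resolution, and two experiments
# reporting nearby foci (coordinates hypothetical).
xs = np.arange(40.0)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
foci = [np.array([20.0, 20.0, 20.0]), np.array([22.0, 20.0, 20.0])]

# ALE value at each voxel = probabilistic union of the MA maps:
# ALE = 1 - prod_i(1 - MA_i).
ma_maps = [gaussian_ma(grid, f) for f in foci]
ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)
```

Spatial convergence across experiments raises the ALE value above any single MA map, which is what the subsequent permutation test assesses.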
Affiliation(s)
- Jinhui Li: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
- Ruibin Zhang: Department of Psychology, School of Public Health, Southern Medical University (Guangdong Provincial Key Laboratory of Tropical Disease Research), Guangzhou, China; Department of Psychiatry, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Siqi Liu: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
- Qunjun Liang: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
- Senning Zheng: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
- Xianyou He: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
- Ruiwang Huang: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong, 510631, China
106. Steel A, Billings MM, Silson EH, Robertson CE. A network linking scene perception and spatial memory systems in posterior cerebral cortex. Nat Commun 2021; 12:2632. PMID: 33976141. PMCID: PMC8113503. DOI: 10.1038/s41467-021-22848-z.
Abstract
The scene-perception and spatial-memory systems of the human brain are individually well described, but how do these neural systems interact? Here, using fine-grained individual-subject fMRI, we report three cortical areas of the human brain, each lying immediately anterior to a region of the scene-perception network in posterior cerebral cortex, that selectively activate when recalling familiar real-world locations. Despite their close proximity to the scene-perception areas, network analyses show that these regions constitute a distinct functional network that interfaces with spatial-memory systems during naturalistic scene understanding. These "place-memory areas" offer a new framework for understanding how the brain implements memory-guided visual behaviors, including navigation.
Affiliation(s)
- Adam Steel: Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Madeleine M Billings: Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Edward H Silson: Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, EH8 9JZ, UK
- Caroline E Robertson: Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
107. The parahippocampal place area and hippocampus encode the spatial significance of landmark objects. Neuroimage 2021; 236:118081. PMID: 33882351. DOI: 10.1016/j.neuroimage.2021.118081.
Abstract
Landmark objects are points of reference that can anchor one's internal cognitive map to the external world while navigating. They are especially useful in indoor environments where other cues such as spatial geometries are often similar across locations. We used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA) to understand how the spatial significance of landmark objects is represented in the human brain. Participants learned the spatial layout of a virtual building with arbitrary objects as unique landmarks in each room during a navigation task. They were scanned while viewing the objects before and after learning. MVPA revealed that the neural representation of landmark objects in the right parahippocampal place area (rPPA) and the hippocampus transformed systematically according to their locations. Specifically, objects in different rooms became more distinguishable than objects in the same room. These results demonstrate that rPPA and the hippocampus encode the spatial significance of landmark objects in indoor spaces.
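The MVPA result above (objects in different rooms becoming more distinguishable than objects in the same room) amounts to a pattern-distance comparison. A minimal sketch on simulated voxel patterns, with all sizes and signal strengths hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated post-learning voxel patterns for 4 landmark
# objects, two per room: each pattern is a shared room
# component plus object-specific noise (all hypothetical).
room_signal = rng.normal(size=(2, 50))  # one component per room
post = np.vstack([
    3.0 * room_signal[r] + rng.normal(size=50)
    for r in [0, 0, 1, 1]
])

def dist(a, b):
    return np.linalg.norm(a - b)

# Compare within-room vs between-room pattern distances.
within = (dist(post[0], post[1]) + dist(post[2], post[3])) / 2.0
between = np.mean([dist(post[i], post[j])
                   for i in (0, 1) for j in (2, 3)])
print(between > within)  # True: rooms separate the patterns
```

In the study, the analogous comparison is between correlation-based pattern distances measured before versus after spatial learning, within the rPPA and hippocampus.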
108. Rolls ET. Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus 2021; 31:593-611. PMID: 33760309. DOI: 10.1002/hipo.23324.
Abstract
A new theory is proposed of mechanisms of navigation in primates including humans in which spatial view cells found in the primate hippocampus and parahippocampal gyrus are used to guide the individual from landmark to landmark. The navigation involves approach to each landmark in turn (taxis), using spatial view cells to identify the next landmark in the sequence, and does not require a topological map. Two other cell types found in primates, whole body motion cells, and head direction cells, can be utilized in the spatial view cell navigational mechanism, but are not essential. If the landmarks become obscured, then the spatial view representations can be updated by self-motion (idiothetic) path integration using spatial coordinate transform mechanisms in the primate dorsal visual system to transform from egocentric to allocentric spatial view coordinates. A continuous attractor network or time cells or working memory is used in this approach to navigation to encode and recall the spatial view sequences involved. I also propose how navigation can be performed using a further type of neuron found in primates, allocentric-bearing-to-a-landmark neurons, in which changes of direction are made when a landmark reaches a particular allocentric bearing. This is useful if a landmark cannot be approached. The theories are made explicit in models of navigation, which are then illustrated by computer simulations. These types of navigation are contrasted with triangulation, which requires a topological map. It is proposed that the first strategy utilizing spatial view cells is used frequently in humans, and is relatively simple because primates have spatial view neurons that respond allocentrically to locations in spatial scenes. An advantage of this approach to navigation is that hippocampal spatial view neurons are also useful for episodic memory, and for imagery.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
109. Interdependent self-construal predicts increased gray matter volume of scene processing regions in the brain. Biol Psychol 2021; 161:108050. PMID: 33592270. DOI: 10.1016/j.biopsycho.2021.108050.
Abstract
Interdependent self-construal (SC) is thought to lead to a more holistic cognitive style that emphasizes processing of the background scene behind a focal object. At present, little is known about whether structural properties of the brain underlie this functional relationship. Here, we examined the gray matter (GM) volume of three cortical regions involved in scene processing, a cornerstone of contextual processing. Study 1 tested 78 European American non-student adults and found that interdependent (vs. independent) SC predicts higher GM volume in the parahippocampal place area (PPA), one of the three target regions. Testing both European American and East Asian college students (total N = 126), Study 2 replicated this association. Moreover, the GM volume of all three target regions was greater for East Asians than for European Americans. Our findings suggest a structural neural underpinning for the cultural variation in cognitive style.
110.
Abstract
Rapid visual perception is often viewed as a bottom-up process, and category-preferred neural regions are often characterized as automatic, default processing mechanisms for visual inputs of their categorical preference. To explore the sensitivity of such regions to top-down information, we examined three scene-preferring brain regions, the occipital place area (OPA), the parahippocampal place area (PPA), and the retrosplenial complex (RSC), and tested whether the processing of outdoor scenes is influenced by the functional context in which they are seen. Context was manipulated by presenting real-world landscape images as if viewed through a window or within a picture frame, manipulations that do not affect scene content but do affect one's functional knowledge about the scene. This manipulation influenced neural scene processing (as measured by fMRI): the OPA and the PPA exhibited greater activity when participants viewed images as if through a window rather than within a picture frame, whereas the RSC did not show this difference. In a separate behavioral experiment, functional context affected scene memory in predictable directions (boundary extension). Our interpretation is that the window context denotes three-dimensionality, rendering the perceptual experience of viewing landscapes more realistic, whereas the frame context denotes a 2-D image. As such, the more spatially biased scene representations in the OPA and the PPA are influenced by top-down perceptual expectations generated from context, while the more semantically biased scene representations in the RSC are likely less affected by top-down signals carrying information about the physical layout of a scene.
111. Srokova S, Hill PF, Elward RL, Rugg MD. Effects of age on goal-dependent modulation of episodic memory retrieval. Neurobiol Aging 2021; 102:73-88. PMID: 33765433. DOI: 10.1016/j.neurobiolaging.2021.02.004.
Abstract
Retrieval gating refers to the ability to modulate the retrieval of features of a single memory episode according to behavioral goals. Recent findings demonstrate that younger adults engage retrieval gating by attenuating the representation of task-irrelevant features of an episode. Here, we examine whether retrieval gating varies with age. Younger and older adults incidentally encoded words superimposed over scenes or scrambled backgrounds that were displayed in one of three spatial locations. Participants subsequently underwent fMRI as they completed two memory tasks: the background task, which tested memory for the word's background, and the location task, testing memory for the word's location. Employing univariate and multivariate approaches, we demonstrated that younger, but not older adults, exhibited attenuated reinstatement of scene information when it was goal-irrelevant (during the location task). Additionally, in younger adults only, the strength of scene reinstatement in the parahippocampal place area during the background task was related to item and source memory performance. Together, these findings point to an age-related decline in the ability to engage retrieval gating.
Affiliation(s)
- Sabina Srokova: Center for Vital Longevity, University of Texas at Dallas, Dallas, TX, USA; School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Paul F Hill: Center for Vital Longevity, University of Texas at Dallas, Dallas, TX, USA; School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Rachael L Elward: School of Applied Sciences, Division of Psychology, London South Bank University, London, UK
- Michael D Rugg: Center for Vital Longevity, University of Texas at Dallas, Dallas, TX, USA; School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA; School of Psychology, University of East Anglia, Norwich, UK; Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX, USA
112. Cox CR, Rogers TT. Finding Distributed Needles in Neural Haystacks. J Neurosci 2021; 41:1019-1032. PMID: 33334868. PMCID: PMC7880292. DOI: 10.1523/jneurosci.0904-20.2020.
Abstract
The human cortex encodes information in complex networks that can be anatomically dispersed and variable in their microstructure across individuals. Using simulations with neural network models, we show that contemporary statistical methods for functional brain imaging, including univariate contrast, searchlight multivariate pattern classification, and whole-brain decoding with L1 or L2 regularization, each have critical and complementary blind spots under these conditions. We then introduce the sparse-overlapping-sets (SOS) LASSO, a whole-brain multivariate approach that exploits structured sparsity to find network-distributed information, and show in simulation that it captures the advantages of other approaches while avoiding their limitations. When applied to fMRI data to find neural responses that discriminate visually presented faces from other visual stimuli, each method yields a different result, but existing approaches all support the canonical view that face perception engages localized areas in posterior occipital and temporal regions. In contrast, SOS LASSO uncovers a network spanning all four lobes of the brain. The result cannot reflect spurious selection of out-of-system areas because decoding accuracy remains exceedingly high even when canonical face and place systems are removed from the dataset. When used to discriminate visual scenes from other stimuli, the same approach reveals a localized signal consistent with other methods, illustrating that SOS LASSO can detect both widely distributed and localized representational structure. Thus, structured sparsity can provide an unbiased method for testing claims of functional localization. For faces and possibly other domains, such decoding may reveal representations more widely distributed than previously suspected.

Significance Statement: Brain systems represent information as patterns of activation over neural populations connected in networks that can be widely distributed anatomically, variable across individuals, and intermingled with other networks. We show that four widespread statistical approaches to functional brain imaging have critical blind spots in this scenario and use simulations with neural network models to illustrate why. We then introduce a new approach designed specifically to find radically distributed representations in neural networks. In simulation and in fMRI data collected in the well-studied domain of face perception, the new approach discovers extensive signal missed by the other methods, suggesting that prior functional imaging work may have significantly underestimated the degree to which neurocognitive representations are distributed and variable across individuals.
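The structured sparsity underlying SOS LASSO selects or discards entire sets of voxels together, rather than individual voxels as the plain L1 lasso does. A minimal sketch of that idea is the proximal step of an ordinary (non-overlapping) group lasso; SOS LASSO's overlapping sets and full optimization are more involved, so treat this as a simplified illustration:

```python
import numpy as np

def prox_group_lasso(w, groups, lam):
    """Group-lasso proximal step: zero out a whole group when
    its norm is below lam, otherwise shrink it uniformly."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= lam else w[g] * (1.0 - lam / norm)
    return out

# Two candidate voxel sets: one weak, one strong (toy weights).
w = np.array([0.1, 0.2, 0.1, 2.0, 1.5, 1.0])
groups = [[0, 1, 2], [3, 4, 5]]
shrunk = prox_group_lasso(w, groups, lam=0.5)

# The weak set is removed entirely; the strong set survives
# with its within-set pattern intact (uniform shrinkage).
print(shrunk[:3], np.all(shrunk[3:] > 0))
```

Selection at the level of sets is what lets a structured-sparsity decoder keep anatomically dispersed but jointly informative regions that voxel-wise L1 would prune one coefficient at a time.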
Affiliation(s)
- Christopher R Cox: Department of Psychology, Louisiana State University, Baton Rouge, Louisiana 70803
- Timothy T Rogers: Department of Psychology, University of Wisconsin, Madison, Wisconsin 53706
113. The brain dynamics of architectural affordances during transition. Sci Rep 2021; 11:2796. PMID: 33531612. PMCID: PMC7854617. DOI: 10.1038/s41598-021-82504-w.
Abstract
Action is a medium for collecting sensory information about the environment, which in turn is shaped by architectural affordances. Affordances characterize the fit between the physical structure of the body and capacities for movement and interaction with the environment, and thus rely on sensorimotor processes associated with exploring the surroundings. Central to sensorimotor brain dynamics, the attentional mechanisms directing the gating of sensory signals share neuronal resources with the motor-related processes necessary for inferring the external causes of those signals. Such a predictive-coding approach suggests that sensorimotor dynamics are sensitive to architectural affordances that support or suppress specific kinds of actions for an individual. However, how architectural affordances relate to the attentional mechanisms underlying this gating function remains unknown. Here we demonstrate that event-related desynchronization of alpha-band oscillations in parieto-occipital and medio-temporal regions covaries with architectural affordances. Source-level time-frequency analysis of data recorded in a motor-priming Mobile Brain/Body Imaging experiment revealed strong event-related desynchronization of the alpha band originating from the posterior cingulate complex, the parahippocampal region, and the occipital cortex. Our results contribute, first, to the understanding of how the brain resolves architectural affordances relevant to behaviour. Second, they indicate that alpha-band activity originating from the occipital cortex and parahippocampal region covaries with architectural affordances before participants interact with the environment, whereas during the interaction the posterior cingulate cortex and motor areas dynamically reflect the affordable behaviour. We conclude that sensorimotor dynamics reflect behaviour-relevant features of the designed environment.
|
114
|
Myelin development in visual scene-network tracts beyond late childhood: A multimethod neuroimaging study. Cortex 2021; 137:18-34. [PMID: 33588130 DOI: 10.1016/j.cortex.2020.12.016] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 07/30/2020] [Accepted: 12/14/2020] [Indexed: 12/15/2022]
Abstract
The visual scene network, comprising the parahippocampal place area (PPA), retrosplenial cortex (RSC), and occipital place area (OPA), shows prolonged functional development. Structural development of the white matter underlying the scene network has not been investigated, despite its potential influence on scene-network function. The key factor in white matter maturation is myelination; however, research on myelination using the gold-standard method of post-mortem histology is scarce. The in vivo alternatives, diffusion-weighted imaging (DWI) and myelin water imaging (MWI), have so far reported only broad-scale findings that prohibit inferences about the scene network. Here, we combine MWI, DWI tractography, and fMRI to investigate myelination in scene-network tracts in middle childhood, late childhood, and adulthood. We report increasing myelin from middle childhood to adulthood in the right PPA-OPA tract, and trends towards increases in the left and right RSC-OPA tracts. Investigating tracts to regions highly connected with the scene network, such as early visual cortex and the hippocampus, did not yield any significant age-group differences. Our findings indicate that structural development coincides with functional development in the scene network, possibly enabling structure-function interactions.
|
115
|
Yen C, Chiang MC. Examining the effect of online advertisement cues on human responses using eye-tracking, EEG, and MRI. Behav Brain Res 2021; 402:113128. [PMID: 33460680 DOI: 10.1016/j.bbr.2021.113128] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 12/07/2020] [Accepted: 01/04/2021] [Indexed: 11/29/2022]
Abstract
This study sought to show how disciplines such as neuroscience and marketing can be applied to advertising and consumer behavior. The application of neuroscience methods to analyzing and understanding human behavior related to the Elaboration Likelihood Model (ELM) and brain activity has recently garnered attention. This study examines brain processes while participants formed preferences for a product, and demonstrates factors that influence consumer behavior using eye-tracking, electroencephalography (EEG), and magnetic resonance imaging (MRI). We designed two online advertising conditions: peripheral cues without an argument and central cues with argument strength. Thirty respondents participated in the experiment, which combined eye-tracking, EEG, and MRI to explore brain activity in the central cue condition. We also investigated whether diffusion tensor imaging (DTI) analysis could detect regional brain changes. Using eye-tracking, we found that responses in the central cue condition were reflected mainly in mean fixation duration, number of fixations, mean saccade duration, and number of saccades. Moreover, the EEG findings show that the fusiform gyrus and frontal cortex are significantly associated with building a relationship by inferring central cues. The MRI images show that the fusiform gyrus and frontal cortex are significantly active in the central cue condition, and DTI analysis indicates changes in the corpus callosum in that condition. Together, the eye-tracking, EEG, MRI, and DTI results suggest that these regions, particularly the fusiform gyrus, frontal cortex, and corpus callosum, underlie responses when viewing advertisements.
Affiliation(s)
- Chiahui Yen
- Department of International Business, Ming Chuan University, Taipei 111, Taiwan
- Ming-Chang Chiang
- Department of Life Science, College of Science and Engineering, Fu Jen Catholic University, New Taipei City 242, Taiwan
|
116
|
Lee SM, Jin SW, Park SB, Park EH, Lee CH, Lee HW, Lim HY, Yoo SW, Ahn JR, Shin J, Lee SA, Lee I. Goal-directed interaction of stimulus and task demand in the parahippocampal region. Hippocampus 2021; 31:717-736. [PMID: 33394547 PMCID: PMC8359334 DOI: 10.1002/hipo.23295] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 12/05/2020] [Accepted: 12/12/2020] [Indexed: 11/10/2022]
Abstract
The hippocampus and parahippocampal region are essential for representing episodic memories involving various spatial locations and objects, and for using those memories for future adaptive behavior. The "dual-stream model" was initially formulated based on anatomical characteristics of the medial temporal lobe, dividing the parahippocampal region into two streams that separately process and relay spatial and nonspatial information to the hippocampus. Despite its significance, the dual-stream model in its original form cannot explain recent experimental results, and many researchers have recognized the need for its modification. Here, we argue that dividing the parahippocampal region into spatial and nonspatial streams a priori may be too simplistic, particularly in light of ambiguous situations in which a sensory cue alone (e.g., a visual scene) may not allow such a definitive categorization. Upon reviewing evidence, including our own, that reveals the importance of goal-directed behavioral responses in determining the relative involvement of the parahippocampal processing streams, we propose the Goal-directed Interaction of Stimulus and Task-demand (GIST) model. In the GIST model, input stimuli such as visual scenes and objects are first processed by both the postrhinal and perirhinal cortices (the postrhinal cortex more heavily involved with visual scenes and the perirhinal cortex with objects), with relatively little dependence on behavioral task demand. However, once perceptual ambiguities are resolved and the scenes and objects are identified and recognized, the information is processed through the medial or lateral entorhinal cortex, depending on whether it is used to fulfill navigational or non-navigational goals, respectively. As complex sensory stimuli are used for both navigational and non-navigational purposes in an intermixed fashion in naturalistic settings, the hippocampus may be required to assemble these experiences into a coherent map, allowing flexible cognitive operations for adaptive behavior.
Affiliation(s)
- Su-Min Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Seung-Woo Jin
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Seong-Beom Park
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Eun-Hye Park
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Choong-Hee Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Hyun-Woo Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Heung-Yeol Lim
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Seung-Woo Yoo
- Department of Biomedical Science, Charles E. Schmidt College of Medicine, Brain Institute, Florida Atlantic University, Jupiter, Florida, USA
- Jae Rong Ahn
- Department of Biology, Tufts University, Medford, Massachusetts, USA
- Jhoseph Shin
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
- Sang Ah Lee
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Inah Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, South Korea
|
117
|
Peer M, Brunec IK, Newcombe NS, Epstein RA. Structuring Knowledge with Cognitive Maps and Cognitive Graphs. Trends Cogn Sci 2021; 25:37-54. [PMID: 33248898 PMCID: PMC7746605 DOI: 10.1016/j.tics.2020.10.004] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 10/16/2020] [Accepted: 10/17/2020] [Indexed: 12/21/2022]
Abstract
Humans and animals use mental representations of the spatial structure of the world to navigate. The classical view is that these representations take the form of Euclidean cognitive maps, but alternative theories suggest that they are cognitive graphs consisting of locations connected by paths. We review evidence suggesting that both map-like and graph-like representations exist in the mind/brain, relying on partially overlapping neural systems. Maps and graphs can operate simultaneously or separately, and they may be applied to both spatial and nonspatial knowledge. By providing structural frameworks for complex information, cognitive maps and cognitive graphs offer fundamental organizing schemata that allow us to navigate in physical, social, and conceptual spaces.
Affiliation(s)
- Michael Peer
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Iva K Brunec
- Department of Psychology, Temple University, Philadelphia, PA 19122, USA
- Nora S Newcombe
- Department of Psychology, Temple University, Philadelphia, PA 19122, USA
- Russell A Epstein
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
|
118
|
Bainbridge WA, Hall EH, Baker CI. Distinct Representational Structure and Localization for Visual Encoding and Recall during Visual Imagery. Cereb Cortex 2020; 31:1898-1913. [PMID: 33285563 DOI: 10.1093/cercor/bhaa329] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 10/12/2020] [Accepted: 10/12/2020] [Indexed: 01/03/2023] Open
Abstract
During memory recall and visual imagery, reinstatement is thought to occur as an echoing of the neural patterns during encoding. However, the precise information in these recall traces is relatively unknown, with previous work primarily investigating either broad distinctions or specific images, rarely bridging these levels of information. Using ultra-high-field (7T) functional magnetic resonance imaging with an item-based visual recall task, we conducted an in-depth comparison of encoding and recall along a spectrum of granularity, from coarse (scenes, objects) to mid (e.g., natural, manmade scenes) to fine (e.g., living room, cupcake) levels. In the scanner, participants viewed a trial-unique item, and after a distractor task, visually imagined the initial item. During encoding, we observed decodable information at all levels of granularity in category-selective visual cortex. In contrast, information during recall was primarily at the coarse level with fine-level information in some areas; there was no evidence of mid-level information. A closer look revealed segregation between voxels showing the strongest effects during encoding and those during recall, and peaks of encoding-recall similarity extended anterior to category-selective cortex. Collectively, these results suggest visual recall is not merely a reactivation of encoding patterns, displaying a different representational structure and localization from encoding, despite some overlap.
Affiliation(s)
- Wilma A Bainbridge
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA
- Elizabeth H Hall
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA
- Department of Psychology, University of California Davis, Davis, CA 95616, USA
- Center for Mind and Brain, University of California Davis, Davis, CA 95618, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA
|
119
|
Causal Evidence for a Double Dissociation between Object- and Scene-Selective Regions of Visual Cortex: A Preregistered TMS Replication Study. J Neurosci 2020; 41:751-756. [PMID: 33262244 DOI: 10.1523/jneurosci.2162-20.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 10/23/2020] [Accepted: 10/26/2020] [Indexed: 12/15/2022] Open
Abstract
Natural scenes are characterized by individual objects as well as by global scene properties such as spatial layout. Functional neuroimaging research has shown that this distinction between object and scene processing is one of the main organizing principles of human high-level visual cortex. For example, object-selective regions, including the lateral occipital complex (LOC), were shown to represent object content (but not scene layout), while scene-selective regions, including the occipital place area (OPA), were shown to represent scene layout (but not object content). Causal evidence for a double dissociation between LOC and OPA in representing objects and scenes is currently limited, however. One TMS experiment, conducted in a relatively small sample (N = 13), reported an interaction between LOC and OPA stimulation and object and scene recognition performance (Dilks et al., 2013). Here, we present a high-powered preregistered replication of this study (N = 72, including male and female human participants), using group-average fMRI coordinates to target LOC and OPA. Results revealed unambiguous evidence for a double dissociation between LOC and OPA: relative to vertex stimulation, TMS over LOC selectively impaired the recognition of objects, while TMS over OPA selectively impaired the recognition of scenes. Furthermore, we found that these effects were stable over time and consistent across individual objects and scenes. These results show that LOC and OPA can be reliably and selectively targeted with TMS, even when defined based on group-average fMRI coordinates. More generally, they support the distinction between object and scene processing as an organizing principle of human high-level visual cortex.
SIGNIFICANCE STATEMENT: Our daily-life environments are characterized both by individual objects and by global scene properties. The distinction between object and scene processing features prominently in visual cognitive neuroscience, with fMRI studies showing that this distinction is one of the main organizing principles of human high-level visual cortex. However, causal evidence for the selective involvement of object- and scene-selective regions in processing their preferred category is less conclusive. Here, testing a large sample (N = 72) using an established paradigm and a preregistered protocol, we found that TMS over object-selective cortex (lateral occipital complex) selectively impaired object recognition, while TMS over scene-selective cortex (occipital place area) selectively impaired scene recognition. These results provide strong causal evidence for the distinction between object and scene processing in human visual cortex.
|
120
|
Koch GE, Akpan E, Coutanche MN. Image memorability is predicted by discriminability and similarity in different stages of a convolutional neural network. Learn Mem 2020; 27:503-509. [PMID: 33199475 PMCID: PMC7670863 DOI: 10.1101/lm.051649.120] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Accepted: 09/24/2020] [Indexed: 11/25/2022]
Abstract
The features of an image can be represented at multiple levels, from its low-level visual properties to high-level meaning. What drives some images to be memorable while others are forgettable? We address this question across two behavioral experiments. In the first, different layers of a convolutional neural network (CNN), which represent progressively higher levels of features, were used to select the images shown to 100 participants through a form of prospective assignment: the discriminability or similarity of an image relative to others, according to different CNN layers, dictated the images presented to different groups, who made a simple indoor versus outdoor judgment for each scene. We found that participants remembered more scene images that were selected for their low-level discriminability or high-level similarity. A second experiment replicated these results in an independent sample of 50 participants, with a different order of postencoding tasks. Together, these experiments provide evidence that both discriminability and similarity, at different visual levels, predict image memorability.
Affiliation(s)
- Griffin E Koch
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania 15260, USA
- Essang Akpan
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Marc N Coutanche
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania 15260, USA
- Brain Institute, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA
|
121
|
Vlcek K, Fajnerova I, Nekovarova T, Hejtmanek L, Janca R, Jezdik P, Kalina A, Tomasek M, Krsek P, Hammer J, Marusic P. Mapping the Scene and Object Processing Networks by Intracranial EEG. Front Hum Neurosci 2020; 14:561399. [PMID: 33192393 PMCID: PMC7581859 DOI: 10.3389/fnhum.2020.561399] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 09/02/2020] [Indexed: 11/13/2022] Open
Abstract
Human perception and cognition are based predominantly on visual information processing. Much of the information regarding neuronal correlates of visual processing has been derived from functional imaging studies, which have identified a variety of brain areas contributing to visual analysis, recognition, and processing of objects and scenes. However, only two of these areas, namely the parahippocampal place area (PPA) and the lateral occipital complex (LOC), have been verified and further characterized by intracranial electroencephalography (iEEG). iEEG is a unique measurement technique that samples a local neuronal population with high temporal and anatomical resolution. In the present study, we aimed to expand on previous reports and examine brain activity for selectivity of scenes and objects in the broadband high-gamma frequency range (50–150 Hz). We collected iEEG data from 27 epileptic patients while they watched a series of images containing objects and scenes, and we identified 375 bipolar channels responding to at least one of these two categories. Using K-means clustering, we delineated their brain localization. In addition to the two areas described previously, we detected significant responses in two other scene-selective areas not yet reported by any electrophysiological study, namely the occipital place area (OPA) and the retrosplenial complex. Moreover, using iEEG we revealed a much broader network underlying visual processing than described to date using specialized functional imaging designs. We report that the scene-selective areas also include the posterior collateral sulcus and the anterior temporal region, which have previously been linked to scene novelty and landmark naming. Object-selective responses appeared in parietal, frontal, and temporal regions connected with tool use and object recognition. The temporal analyses specified the time course of category selectivity through the dorsal and ventral visual streams. Receiver operating characteristic analyses identified the PPA and the fusiform portion of the LOC as the most selective for scenes and objects, respectively. Our findings represent a valuable overview of visual processing selectivity for scenes and objects based on iEEG analyses and thus contribute to a better understanding of visual processing in the human brain.
Affiliation(s)
- Kamil Vlcek
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Iveta Fajnerova
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- National Institute of Mental Health, Prague, Czechia
- Tereza Nekovarova
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- National Institute of Mental Health, Prague, Czechia
- Lukas Hejtmanek
- Department of Neurophysiology of Memory, Institute of Physiology, Czech Academy of Sciences, Prague, Czechia
- Radek Janca
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia
- Petr Jezdik
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia
- Adam Kalina
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Martin Tomasek
- Department of Neurosurgery, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Pavel Krsek
- Department of Paediatric Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Jiri Hammer
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
- Petr Marusic
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia
|
122
|
Monk AM, Barnes GR, Maguire EA. The Effect of Object Type on Building Scene Imagery-an MEG Study. Front Hum Neurosci 2020; 14:592175. [PMID: 33240069 PMCID: PMC7683518 DOI: 10.3389/fnhum.2020.592175] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Accepted: 10/09/2020] [Indexed: 12/28/2022] Open
Abstract
Previous studies have reported that some objects evoke a sense of local three-dimensional space (space-defining; SD), while others do not (space-ambiguous; SA), despite being imagined or viewed in isolation devoid of a background context. Moreover, people show a strong preference for SD objects when given a choice of objects with which to mentally construct scene imagery. When deconstructing scenes, people retain significantly more SD objects than SA objects. It, therefore, seems that SD objects might enjoy a privileged role in scene construction. In the current study, we leveraged the high temporal resolution of magnetoencephalography (MEG) to compare the neural responses to SD and SA objects while they were being used to build imagined scene representations, as this has not been examined before using neuroimaging. On each trial, participants gradually built a scene image from three successive auditorily-presented object descriptions and an imagined 3D space. We then examined the neural dynamics associated with the points during scene construction when either SD or SA objects were being imagined. We found that SD objects elicited theta changes relative to SA objects in two brain regions, the right ventromedial prefrontal cortex (vmPFC) and the right superior temporal gyrus (STG). Furthermore, using dynamic causal modeling, we observed that the vmPFC drove STG activity. These findings may indicate that SD objects serve to activate schematic and conceptual knowledge in vmPFC and STG upon which scene representations are then built.
Affiliation(s)
- Anna M Monk
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Gareth R Barnes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
|
123
|
Liu ZX, Rosenbaum RS, Ryan JD. Restricting Visual Exploration Directly Impedes Neural Activity, Functional Connectivity, and Memory. Cereb Cortex Commun 2020; 1:tgaa054. [PMID: 33154992 PMCID: PMC7595095 DOI: 10.1093/texcom/tgaa054] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 07/28/2020] [Accepted: 08/12/2020] [Indexed: 11/13/2022] Open
Abstract
We move our eyes to explore the visual world, extract information, and create memories. The number of gaze fixations (the stops that the eyes make) has been shown to correlate with activity in the hippocampus, a region critical for memory, and with later recognition memory. Here, we combined eyetracking with fMRI to provide direct evidence for the relationships between gaze fixations, neural activity, and memory during scene viewing. Compared to free viewing, fixating a single location reduced: 1) subsequent memory, 2) neural activity along the ventral visual stream into the hippocampus, 3) neural similarity between effects of subsequent memory and visual exploration, and 4) functional connectivity among the hippocampus, parahippocampal place area, and other cortical regions. Gaze fixations were uniquely related to hippocampal activity, even after controlling for neural effects due to subsequent memory. This study therefore provides key causal evidence supporting the notion that the oculomotor and memory systems are intrinsically related at both the behavioral and neural levels. Individual gaze fixations may provide the basic unit of information on which memory binding processes operate.
Affiliation(s)
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan 48128, USA
- R Shayna Rosenbaum
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Jennifer D Ryan
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
|
124
|
Henderson JM, Goold JE, Choi W, Hayes TR. Neural Correlates of Fixated Low- and High-level Scene Properties during Active Scene Viewing. J Cogn Neurosci 2020; 32:2013-2023. [PMID: 32573384 PMCID: PMC11164273 DOI: 10.1162/jocn_a_01599] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photographs of real-world scenes in an MRI scanner while their eye movements were recorded. Fixation-related fMRI was used to measure activation as a function of lower- and higher-level scene properties at fixation, operationalized as edge density and meaning maps, respectively. We found that edge density at fixation was most associated with activation in early visual areas, whereas semantic content at fixation was most associated with activation along the ventral visual stream including core object and scene-selective areas (lateral occipital complex, parahippocampal place area, occipital place area, and retrosplenial cortex). The observed activation from semantic content was not accounted for by differences in edge density. The results are consistent with active vision models in which fixation gates detailed visual analysis for fixated scene regions, and this gating influences both lower and higher levels of scene analysis.
Affiliation(s)
- Wonil Choi
- Gwangju Institute of Science and Technology
|
125
|
Abstract
We show that the classical problem of three-dimensional (3D) size perception in obliquely viewed pictures can be understood by comparing human performance to the optimal geometric solution. A photograph seen from the camera position can form the same retinal projection as the physical 3D scene, but retinal projections of sizes and shapes are distorted in oblique viewing. For real scenes, we previously showed that size and shape inconstancy result despite observers using the correct geometric back-transform, because some retinal images evoke misestimates of object slant or viewing elevation. Here, we examine how observers estimate 3D sizes in oblique views of pictures of objects lying on the ground in different poses. Compared to estimates for real scenes, sizes in oblique views of pictures were seriously underestimated for objects at frontoparallel poses, but there was almost no change for objects perceived as pointing toward the viewer. The inverse of the function relating projected length to pose, camera elevation, and viewing azimuth gives the optimal correction factor for inferring correct 3D lengths if the elevation and azimuth are estimated accurately. Empirical correction functions had shapes similar to the optimal one but lower amplitude. Measurements revealed that observers systematically underestimated viewing azimuth, similar to the frontoparallel bias in object pose perception. A model that adds underestimation of viewing azimuth to the geometric back-transform provided good fits to 3D lengths estimated from oblique views. These results add to accumulating evidence that observers use internalized projective geometry to perceive sizes, shapes, and poses in 3D scenes and their pictures.
Affiliation(s)
- Akihito Maruya
- Graduate Center for Vision Research, State University of New York, New York, NY, USA
- Qasim Zaidi
- Graduate Center for Vision Research, State University of New York, New York, NY, USA
|
126
|
Castelhano MS, Krzyś K. Rethinking Space: A Review of Perception, Attention, and Memory in Scene Processing. Annu Rev Vis Sci 2020; 6:563-586. [PMID: 32491961 DOI: 10.1146/annurev-vision-121219-081745] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Scene processing is fundamentally influenced and constrained by spatial layout and spatial associations with objects. However, semantic information has played a vital role in propelling our understanding of real-world scene perception forward. In this article, we review recent advances in assessing how spatial layout and spatial relations influence scene processing. We examine the organization of the larger environment and how we take full advantage of spatial configurations independently of semantic information. We demonstrate that a clear differentiation of spatial from semantic information is necessary to advance research in the field of scene processing.
Affiliation(s)
- Monica S Castelhano
- Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Karolina Krzyś
- Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
127
Hill PF, King DR, Rugg MD. Age Differences in Retrieval-Related Reinstatement Reflect Age-Related Dedifferentiation at Encoding. Cereb Cortex 2020; 31:106-122. [PMID: 32829396] [DOI: 10.1093/cercor/bhaa210]
Abstract
Age-related reductions in neural selectivity have been linked to cognitive decline. We examined whether age differences in the strength of retrieval-related cortical reinstatement could be explained by analogous differences in neural selectivity at encoding, and whether reinstatement was associated with memory performance in an age-dependent or an age-independent manner. Young and older adults underwent fMRI as they encoded words paired with images of faces or scenes. During a subsequent scanned memory test, participants judged whether test words were studied or unstudied and, for words judged studied, also made a source memory judgment about the associated image category. Using multi-voxel pattern similarity analyses, we identified robust evidence for reduced scene reinstatement in older relative to younger adults. This decline was, however, largely explained by age differences in neural differentiation at encoding; moreover, a similar relationship between neural selectivity at encoding and retrieval was evident in young participants. The results suggest that, regardless of age, the selectivity with which events are neurally processed at encoding can determine the strength of retrieval-related cortical reinstatement.
Affiliation(s)
- Paul F Hill
- Center for Vital Longevity, University of Texas at Dallas, 1600 Viceroy Dr. #800, Dallas, TX 75235; School of Behavioral and Brain Sciences, University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080
- Danielle R King
- Center for Vital Longevity, University of Texas at Dallas, 1600 Viceroy Dr. #800, Dallas, TX 75235; School of Behavioral and Brain Sciences, University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080
- Michael D Rugg
- Center for Vital Longevity, University of Texas at Dallas, 1600 Viceroy Dr. #800, Dallas, TX 75235; School of Behavioral and Brain Sciences, University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080; Department of Psychiatry, University of Texas Southwestern Medical Center, 6363 Forest Park Rd, 7th Floor, Suite 749, Dallas, TX 75235; School of Psychology, University of East Anglia, Norwich NR4 7TJ, UK
128
Nau M, Navarro Schröder T, Frey M, Doeller CF. Behavior-dependent directional tuning in the human visual-navigation network. Nat Commun 2020; 11:3247. [PMID: 32591544] [PMCID: PMC7320013] [DOI: 10.1038/s41467-020-17000-2]
Abstract
The brain derives cognitive maps from sensory experience that guide memory formation and behavior. Despite extensive efforts, it remains unclear how the underlying population activity unfolds during spatial navigation and how it relates to memory performance. To examine these processes, we combined 7T-fMRI with a kernel-based encoding model of virtual navigation to map world-centered directional tuning across the human cortex. First, we present an in-depth analysis of directional tuning in visual, retrosplenial, parahippocampal and medial temporal cortices. Second, we show that tuning strength, width and topology of this directional code during memory-guided navigation depend on successful encoding of the environment. Finally, we show that participants' locomotory state influences this tuning in sensory and mnemonic regions such as the hippocampus. We demonstrate a direct link between neural population tuning and human cognition, in which high-level memory processing interacts with network-wide visuospatial coding in the service of behavior.
Affiliation(s)
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Trondheim, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tobias Navarro Schröder
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Trondheim, Norway
- Markus Frey
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Trondheim, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Christian F Doeller
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, NTNU, Trondheim, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
129
Danger is in the eyes of the beholder: The effect of visible and invisible affective faces on the judgment of social interactions. Cognition 2020; 203:104371. [PMID: 32569893] [DOI: 10.1016/j.cognition.2020.104371]
Abstract
Previous studies have demonstrated that observing facial expressions can modulate threat detection while looking at neutral or emotion-related scenes. Similarly, stimuli presented outside conscious awareness can influence social judgments of novel neutral stimuli. The aim of this study was twofold: (i) to evaluate whether observing visible emotional faces could affect the judgment of social interactions lacking contextual cues (visible prime condition), and (ii) to evaluate whether this effect could also emerge when the emotional faces were rendered invisible by continuous flash suppression (invisible prime condition). We found that both seen and unseen faces can affect the judgment of ambiguous social interactions, although this effect was particularly evident when the affective faces were clearly visible. The present findings support the idea that conscious and unconscious processing of emotional faces both play an important role in modulating perceivers' affective state and their judgment of social interactions.
130
Zhang RY, Kay K. Flexible top-down modulation in human ventral temporal cortex. Neuroimage 2020; 218:116964. [PMID: 32439537] [DOI: 10.1016/j.neuroimage.2020.116964]
Abstract
Visual neuroscientists have long characterized attention as inducing a scaling or additive effect on fixed parametric functions describing neural responses (e.g., contrast response functions). Here, we instead propose that top-down effects are more complex and manifest in ways that depend not only on attention but also other cognitive processes involved in executing a task. To substantiate this theory, we analyze fMRI responses in human ventral temporal cortex (VTC) in a study where stimulus eccentricity and cognitive task are varied. We find that as stimuli are presented farther into the periphery, bottom-up stimulus-driven responses decline but top-down attentional enhancement increases substantially. This disproportionate enhancement of weak responses cannot be easily explained by conventional models of attention. Furthermore, we find that attentional effects depend on the specific cognitive task performed by the subject, indicating the influence of additional cognitive processes other than attention (e.g., decision-making). The effects we observe replicate in an independent experiment from the same study, and also generalize to a separate study involving different stimulus manipulations (contrast and phase coherence). Our results suggest that a quantitative understanding of top-down modulation requires more nuanced characterization of the multiple cognitive factors involved in completing a perceptual task.
Affiliation(s)
- Ru-Yuan Zhang
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200030, China; Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, 200030, China; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, 55455, USA.
- Kendrick Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, 55455, USA
131
Valdés-Sosa M, Ontivero-Ortega M, Iglesias-Fuster J, Lage-Castellanos A, Gong J, Luo C, Castro-Laguardia AM, Bobes MA, Marinazzo D, Yao D. Objects seen as scenes: Neural circuitry for attending whole or parts. Neuroimage 2020; 210:116526. [DOI: 10.1016/j.neuroimage.2020.116526]
132
Arbib MA. From spatial navigation via visual construction to episodic memory and imagination. Biol Cybern 2020; 114:139-167. [PMID: 32285205] [PMCID: PMC7152744] [DOI: 10.1007/s00422-020-00829-7]
Abstract
This hybrid of review and personal essay argues that models of visual construction are essential to extend spatial navigation models to models that link episodic memory and imagination. The starting point is the TAM-WG model, combining the Taxon Affordance Model and the World Graph model of spatial navigation. The key here is to reject approaches in which memory is restricted to unanalyzed views from familiar places, and their later recall. Instead, we will seek mechanisms for imagining truly novel scenes and episodes. We thus introduce a specific variant of schema theory and VISIONS, a cooperative computation model of visual scene understanding in which a scene is represented by an assemblage of schema instances with links to lower-level "patches" of relevant visual data. We sketch a new conceptual framework for future modeling, Visual Integration of Diverse Multi-Modal Aspects, by extending VISIONS from static scenes to episodes combining agents, actions and objects and assess its relevance to both navigation and episodic memory. We can then analyze imagination as a constructive process that combines aspects of memories of prior episodes along with other schemas and adjusts them into a coherent whole which, through expectations associated with diverse episodes and schemas, may yield the linkage of episodes that constitutes a dream or a narrative. The result is IBSEN, a conceptual model of Imagination in Brain Systems for Episodes and Navigation. The essay closes by analyzing other papers in this Special Issue to assess to what extent their results relate to the research proposed here.
133
|
Neural correlates of retrieval-based enhancement of autobiographical memory in older adults. Sci Rep 2020; 10:1447. [PMID: 31996715] [PMCID: PMC6989450] [DOI: 10.1038/s41598-020-58076-6]
Abstract
Lifelog photo review is considered to enhance the recall of personal events. While a sizable body of research has explored the neural basis of autobiographical memory (AM), there is limited neural evidence on the retrieval-based enhancement of event memory among older adults in real-world environments. This study examined the neural processes of AM as modulated by retrieval practice through lifelog photo review in older adults. In the experiment, the blood-oxygen-level-dependent response during subjects' recall of recent events was recorded; events were cued by photos that may or may not have been exposed to a priori retrieval practice (training). Subjects remembered more episodic details in the trained relative to the non-trained condition. Importantly, the neural correlates of AM were exhibited by (1) dissociable cortical areas related to recollection and familiarity, and (2) a positive correlation between the amount of recollected episodic detail and cortical activation within several lateral temporal and parietal regions. Further analysis of the brain activation pattern at a few regions of interest within the core remember network showed a training_condition × event_detail interaction effect, suggesting that the boosting effect of retrieval practice depended on the level of recollected event detail.