1
Li J, Hiersche KJ, Saygin ZM. Demystifying visual word form area visual and nonvisual response properties with precision fMRI. iScience 2024; 27:111481. PMID: 39759006; PMCID: PMC11696768; DOI: 10.1016/j.isci.2024.111481.
Abstract
The visual word form area (VWFA) is a region in the left ventrotemporal cortex (VTC) whose specificity remains contentious. Using precision fMRI, we examine the VWFA's responses to numerous visual and nonvisual stimuli, comparing them to adjacent category-selective visual regions and regions involved in language and attentional demand. We find that VWFA responds moderately to non-word visual stimuli, but is unique within VTC in its pronounced selectivity for visual words. Interestingly, the VWFA is also the only category-selective visual region engaged in auditory language, unlike the ubiquitous attentional demand effect throughout the VTC. However, this language selectivity is dwarfed by its visual responses even to nonpreferred categories, indicating the VWFA is not a core (amodal) language region. We also observed two additional auditory language VTC clusters, but these had no specificity for visual words. Our detailed investigation clarifies longstanding controversies about the landscape of visual and auditory language functionality within VTC.
Affiliation(s)
- Jin Li
  - Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
  - Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, Columbus, OH 43210, USA
  - School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Kelly J. Hiersche
  - Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
  - Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, Columbus, OH 43210, USA
- Zeynep M. Saygin
  - Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
  - Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, Columbus, OH 43210, USA
2
Küçük E, Foxwell M, Kaiser D, Pitcher D. Moving and Static Faces, Bodies, Objects, and Scenes Are Differentially Represented across the Three Visual Pathways. J Cogn Neurosci 2024; 36:2639-2651. PMID: 38527070; PMCID: PMC11602004; DOI: 10.1162/jocn_a_02139.
Abstract
Models of human cortex propose the existence of neuroanatomical pathways specialized for different behavioral functions. These pathways include a ventral pathway for object recognition, a dorsal pathway for performing visually guided physical actions, and a recently proposed third pathway for social perception. In the current study, we tested the hypothesis that different categories of moving stimuli are differentially processed across the dorsal and third pathways according to their behavioral implications. Human participants (n = 30) were scanned with fMRI while viewing moving and static stimuli from four categories (faces, bodies, scenes, and objects). A whole-brain group analysis showed that moving bodies and moving objects increased neural responses in the bilateral posterior parietal cortex, parts of the dorsal pathway. By contrast, moving faces and moving bodies increased neural responses in the superior temporal sulcus, part of the third pathway. This pattern of results was also supported by a separate ROI analysis showing that moving stimuli produced more robust neural responses for all visual object categories, particularly in lateral and dorsal brain areas. Our results suggest that dynamic naturalistic stimuli from different categories are routed in specific visual pathways that process dissociable behavioral functions.
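A minimal sketch of the kind of ROI-level moving-versus-static comparison described above, in Python. The data, array shapes, and ROI are invented placeholders rather than the authors' pipeline; the point is only the form of the paired contrast across participants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed data: per-participant mean ROI responses (e.g., GLM betas),
# shape (subjects, categories), for moving and static versions of each category.
categories = ["faces", "bodies", "scenes", "objects"]
n_subjects = 30
betas_moving = rng.normal(1.0, 0.5, size=(n_subjects, len(categories)))
betas_static = rng.normal(0.6, 0.5, size=(n_subjects, len(categories)))

# Paired comparison of moving vs static responses within each category.
for i, cat in enumerate(categories):
    t, p = stats.ttest_rel(betas_moving[:, i], betas_static[:, i])
    diff = (betas_moving[:, i] - betas_static[:, i]).mean()
    print(f"{cat}: moving - static = {diff:.2f}, t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```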
Affiliation(s)
- Daniel Kaiser
  - University of York
  - Justus-Liebig-Universität Gießen
  - Philipps-Universität Marburg and Justus-Liebig-Universität Gießen
3
Naveilhan C, Saulay-Carret M, Zory R, Ramanoël S. Spatial Contextual Information Modulates Affordance Processing and Early Electrophysiological Markers of Scene Perception. J Cogn Neurosci 2024; 36:2084-2099. PMID: 39023371; DOI: 10.1162/jocn_a_02223.
Abstract
Scene perception allows humans to extract information from their environment and plan navigation efficiently. The automatic extraction of potential paths in a scene, also referred to as navigational affordance, is supported by scene-selective regions (SSRs) that enable efficient human navigation. Recent evidence suggests that the activity of these SSRs can be influenced by information from adjacent spatial memory areas. However, it remains unexplored how this contextual information could influence the extraction of bottom-up information, such as navigational affordances, from a scene and the underlying neural dynamics. Therefore, we analyzed ERPs in 26 young adults performing scene and spatial memory tasks in artificially generated rooms with varying numbers and locations of available doorways. We found that increasing the number of navigational affordances only impaired performance in the spatial memory task. ERP results showed a similar pattern of activity for both tasks, but with increased P2 amplitude in the spatial memory task compared with the scene memory task. Finally, we observed no modulation of the P2 component by the number of affordances in either task. This task-related modulation of early markers of visual processing suggests that the dynamics of SSR activity are influenced by a priori knowledge, with increased amplitude when participants have more contextual information about the perceived scene. Overall, our results suggest that prior spatial knowledge about the scene, such as the location of a goal, modulates early cortical activity associated with SSRs, and that this information may interact with bottom-up processing of scene content, such as navigational affordances.
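For illustration, a P2-style analysis reduces to epoching a channel around stimulus onsets, baseline-correcting, and averaging amplitude in a fixed latency window per condition. The sketch below uses synthetic data; the sampling rate, latency window, and event lists are assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500                      # Hz (assumed sampling rate)
epoch = (-0.2, 0.6)              # seconds around stimulus onset
p2_window = (0.15, 0.25)         # assumed P2 latency window (s)

# Assumed data: continuous signal at one posterior electrode (microvolts)
# plus stimulus-onset sample indices for each task condition.
eeg = rng.normal(0, 5, size=600 * sfreq)
events = {"scene_task": rng.integers(200, 290_000, 100),
          "spatial_task": rng.integers(200, 290_000, 100)}

def p2_amplitude(signal, onsets):
    """Average epochs, baseline-correct, and return mean amplitude in the P2 window."""
    pre, post = int(-epoch[0] * sfreq), int(epoch[1] * sfreq)
    epochs = np.stack([signal[t - pre:t + post] for t in onsets])
    erp = epochs.mean(axis=0)
    erp -= erp[:pre].mean()                      # baseline correction
    lo, hi = (int((w - epoch[0]) * sfreq) for w in p2_window)
    return erp[lo:hi].mean()

for cond, onsets in events.items():
    print(cond, round(p2_amplitude(eeg, onsets), 2), "µV")
```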
Affiliation(s)
- Raphaël Zory
  - LAMHESS, Université Côte d'Azur, Nice, France
  - Institut Universitaire de France (IUF)
- Stephen Ramanoël
  - LAMHESS, Université Côte d'Azur, Nice, France
  - INSERM, CNRS, Institut de la Vision, Sorbonne Université, Paris, France
4
Rolls ET, Treves A. A theory of hippocampal function: New developments. Prog Neurobiol 2024; 238:102636. PMID: 38834132; DOI: 10.1016/j.pneurobio.2024.102636.
Abstract
We develop further here the only quantitative theory of the storage of information in the hippocampal episodic memory system and its recall back to the neocortex. The theory is upgraded to account for a revolution in understanding of spatial representations in the primate, including human, hippocampus, that go beyond the place where the individual is located, to the location being viewed in a scene. This is fundamental to much primate episodic memory and navigation: functions supported in humans by pathways that build 'where' spatial view representations by feature combinations in a ventromedial visual cortical stream, separate from those for 'what' object and face information to the inferior temporal visual cortex, and for reward information from the orbitofrontal cortex. Key new computational developments include the capacity of the CA3 attractor network for storing whole charts of space; how the correlations inherent in self-organizing continuous spatial representations impact the storage capacity; how the CA3 network can combine continuous spatial and discrete object and reward representations; the roles of the rewards that reach the hippocampus in the later consolidation into long-term memory in part via cholinergic pathways from the orbitofrontal cortex; and new ways of analysing neocortical information storage using Potts networks.
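The attractor storage-and-recall idea at the core of the CA3 account can be illustrated with a textbook Hopfield-style autoassociative network. This toy is far simpler than the quantitative theory in the paper (no sparseness, no continuous spatial charts), and all sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_patterns = 200, 10

# Store random binary (+1/-1) patterns with a Hebbian rule, as in a simple
# Hopfield-style autoassociative network (a stand-in for CA3 recurrent storage).
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively update unit states until the network settles into an attractor."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Recall from a degraded cue (30% of units flipped), mimicking pattern completion.
target = patterns[0]
cue = target.copy()
flip = rng.choice(n_units, size=60, replace=False)
cue[flip] *= -1

overlap_before = (cue @ target) / n_units
overlap_after = (recall(cue) @ target) / n_units
print(f"overlap with stored pattern: cue {overlap_before:.2f} -> recalled {overlap_after:.2f}")
```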
Affiliation(s)
- Edmund T Rolls
  - Oxford Centre for Computational Neuroscience, Oxford, UK
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
5
Rolls ET, Feng J, Zhang R. Selective activations and functional connectivities to the sight of faces, scenes, body parts and tools in visual and non-visual cortical regions leading to the human hippocampus. Brain Struct Funct 2024; 229:1471-1493. PMID: 38839620; PMCID: PMC11176242; DOI: 10.1007/s00429-024-02811-6.
Abstract
Connectivity maps are now available for the 360 cortical regions in the Human Connectome Project Multimodal Parcellation atlas. Here we add function to these maps by measuring selective fMRI activations and functional connectivity increases to stationary visual stimuli of faces, scenes, body parts and tools from 956 HCP participants. Faces activate regions in the ventrolateral visual cortical stream (FFC), in the superior temporal sulcus (STS) visual stream for face and head motion; and inferior parietal visual (PGi) and somatosensory (PF) regions. Scenes activate ventromedial visual stream VMV and PHA regions in the parahippocampal scene area; medial (7m) and lateral parietal (PGp) regions; and the reward-related medial orbitofrontal cortex. Body parts activate the inferior temporal cortex object regions (TE1p, TE2p); but also visual motion regions (MT, MST, FST); and the inferior parietal visual (PGi, PGs) and somatosensory (PF) regions; and the unpleasant-related lateral orbitofrontal cortex. Tools activate an intermediate ventral stream area (VMV3, VVC, PHA3); visual motion regions (FST); somatosensory (1, 2); and auditory (A4, A5) cortical regions. The findings add function to cortical connectivity maps; and show how stationary visual stimuli activate other cortical regions related to their associations, including visual motion, somatosensory, auditory, semantic, and orbitofrontal cortex value-related, regions.
Affiliation(s)
- Edmund T Rolls
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
  - Oxford Centre for Computational Neuroscience, Oxford, UK
- Jianfeng Feng
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
- Ruohan Zhang
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
6
Kosakowski HL, Cohen MA, Herrera L, Nichoson I, Kanwisher N, Saxe R. Cortical Face-Selective Responses Emerge Early in Human Infancy. eNeuro 2024; 11:ENEURO.0117-24.2024. PMID: 38871455; PMCID: PMC11258539; DOI: 10.1523/eneuro.0117-24.2024.
Abstract
In human adults, multiple cortical regions respond robustly to faces, including the occipital face area (OFA) and fusiform face area (FFA), implicated in face perception, and the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC), implicated in higher-level social functions. When in development, does face selectivity arise in each of these regions? Here, we combined two awake infant functional magnetic resonance imaging (fMRI) datasets to create a sample size twice the size of previous reports (n = 65 infants; 2.6-9.6 months). Infants watched movies of faces, bodies, objects, and scenes, while fMRI data were collected. Despite variable amounts of data from each infant, individual subject whole-brain activation maps revealed responses to faces compared to nonface visual categories in the approximate location of OFA, FFA, STS, and MPFC. To determine the strength and nature of face selectivity in these regions, we used cross-validated functional region of interest analyses. Across this larger sample size, face responses in OFA, FFA, STS, and MPFC were significantly greater than responses to bodies, objects, and scenes. Even the youngest infants (2-5 months) showed significantly face-selective responses in FFA, STS, and MPFC, but not OFA. These results demonstrate that face selectivity is present in multiple cortical regions within months of birth, providing powerful constraints on theories of cortical development.
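The cross-validated functional ROI logic described above (define face-selective voxels in one half of the data, then measure category responses in the held-out half) can be sketched as follows. The simulated data, array names, and selection threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)
conditions = ["faces", "bodies", "objects", "scenes"]
n_runs, n_voxels = 8, 500

# Assumed data: condition-wise response estimates per run within an anatomical
# search space (e.g., around the fusiform), shape (runs, conditions, voxels).
responses = rng.normal(0, 1, size=(n_runs, len(conditions), n_voxels))
responses[:, 0, :100] += 1.0        # make 100 voxels genuinely face-selective

even, odd = responses[0::2], responses[1::2]

# Define the fROI from even runs: top 10% of voxels by the faces > others contrast.
contrast = even[:, 0].mean(0) - even[:, 1:].mean((0, 1))
froi = contrast >= np.percentile(contrast, 90)

# Measure independent responses to each condition in the held-out (odd) runs.
held_out = odd[:, :, froi].mean(axis=(0, 2))
for cond, resp in zip(conditions, held_out):
    print(f"{cond}: {resp:.2f}")
```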
Affiliation(s)
- Heather L Kosakowski
  - Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Michael A Cohen
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
  - Department of Psychology and Program in Neuroscience, Amherst College, Amherst, Massachusetts 01002
- Lyneé Herrera
  - Psychology Department, University of Denver, Denver, Colorado 80210
- Isabel Nichoson
  - Tulane Brain Institute, Tulane University, New Orleans, Louisiana 70118
- Nancy Kanwisher
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Rebecca Saxe
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
7
Antypa D, Kafetsios K, Simos P, Kyvelea M, Kosteletou E, Maris T, Papadaki E, Hess U. Distinct neural correlates of accuracy and bias in the perception of facial emotion expressions. Soc Neurosci 2024; 19:215-228. PMID: 39297912; DOI: 10.1080/17470919.2024.2403187.
Abstract
We investigated neural correlates of Emotion Recognition Accuracy (ERA) using the Assessment of Contextualized Emotions (ACE). ACE infuses context by presenting emotion expressions in a naturalistic group setting and distinguishes between accurately perceiving intended emotions (signal) and bias due to perceiving additional, secondary emotions (noise). This social perception process is argued to induce perspective taking in addition to pattern matching in ERA. Thirty participants were presented with an fMRI-compatible adaptation of the ACE consisting of blocks of neutral and emotional faces in single and group-embedded settings. Participants rated the central character's expressions categorically or using scalar scales in subsequent fMRI scans. Distinct brain activations were associated with the perception of emotional vs. neutral faces in the four conditions. Moreover, accuracy and bias scores from the original ACE task performed on another day were associated with brain activation during the scalar (vs. categorical) condition for emotional (vs. neutral) faces embedded in a group. These findings suggest distinct cognitive mechanisms linked to each type of emotional rating and highlight the importance of considering cognitive bias in the assessment of social emotion perception.
Affiliation(s)
- Despina Antypa
  - Medical School, University of Crete, Voutes Campus, Heraklion, Greece
  - Department of Psychology, University of Crete, Rethymno, Greece
  - Department of Psychology, University of Geneva, Geneva, Switzerland
  - Swiss Center of Affective Sciences, University of Geneva, Geneva, Switzerland
- Konstantinos Kafetsios
  - School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece
  - Psychology Department, Palacký University, Olomouc, Czechia
- Panagiotis Simos
  - Medical School, University of Crete, Voutes Campus, Heraklion, Greece
  - Institute of Computer Science, Foundation for Research and Technology, Hellas, Greece
- Marina Kyvelea
  - Medical School, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouela Kosteletou
  - Institute of Applied and Computational Mathematics, Foundation for Research and Technology, Hellas, Greece
  - Department of Psychology, University of Barcelona, Barcelona, Spain
- Thomas Maris
  - Medical School, University of Crete, Voutes Campus, Heraklion, Greece
  - Institute of Computer Science, Foundation for Research and Technology, Hellas, Greece
- Efrosini Papadaki
  - Medical School, University of Crete, Voutes Campus, Heraklion, Greece
  - Institute of Computer Science, Foundation for Research and Technology, Hellas, Greece
- Ursula Hess
  - Department of Psychology, Humboldt University, Berlin, Germany
8
Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024; 160:105650. PMID: 38574782; DOI: 10.1016/j.neubiorev.2024.105650.
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream utilising connectivity from the temporal and parietal visual cortex responds to moving objects and faces, and face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also provides coordinate transforms to provide inputs to the parahippocampal scene area for self-motion update of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
  - Oxford Centre for Computational Neuroscience, Oxford, UK
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
9
Walbrin J, Downing PE, Sotero FD, Almeida J. Characterizing the discriminability of visual categorical information in strongly connected voxels. Neuropsychologia 2024; 195:108815. PMID: 38311112; DOI: 10.1016/j.neuropsychologia.2024.108815.
Abstract
Functional brain responses are strongly influenced by connectivity. Recently, we demonstrated a major example of this: category discriminability within occipitotemporal cortex (OTC) is enhanced for voxel sets that share strong functional connectivity to distal brain areas, relative to those that share lesser connectivity. That is, within OTC regions, sets of 'most-connected' voxels show improved multivoxel pattern discriminability for tool-, face-, and place stimuli relative to voxels with weaker connectivity to the wider brain. However, understanding whether these effects generalize to other domains (e.g. body perception network), and across different levels of the visual processing streams (e.g. dorsal as well as ventral stream areas) is an important extension of this work. Here, we show that this so-called connectivity-guided decoding (CGD) effect broadly generalizes across a wide range of categories (tools, faces, bodies, hands, places). This effect is robust across dorsal stream areas, but less consistent in earlier ventral stream areas. In the latter regions, category discriminability is generally very high, suggesting that extraction of category-relevant visual properties is less reliant on connectivity to downstream areas. Further, CGD effects are primarily expressed in a category-specific manner: For example, within the network of tool regions, discriminability of tool information is greater than non-tool information. The connectivity-guided decoding approach shown here provides a novel demonstration of the crucial relationship between wider brain connectivity and complex local-level functional responses at different levels of the visual processing streams. Further, this approach generates testable new hypotheses about the relationships between connectivity and local selectivity.
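The connectivity-guided decoding idea (rank ROI voxels by the strength of their functional connectivity to the rest of the brain, then compare multivoxel decoding for strongly versus weakly connected voxel sets) can be sketched as below. The data are simulated and the classifier choice is an assumption; with random inputs the two voxel sets decode similarly, so the sketch only demonstrates the procedure, not the reported effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_roi_vox, n_timepoints, n_distal = 120, 200, 300, 1000

# Assumed inputs: trial-wise activity patterns in an ROI, trial labels
# (e.g., tools vs faces), and time series for ROI voxels and distal brain voxels.
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(0, 1, size=(n_trials, n_roi_vox))
patterns[labels == 1, :50] += 0.5          # some voxels carry category information
roi_ts = rng.normal(0, 1, size=(n_timepoints, n_roi_vox))
distal_ts = rng.normal(0, 1, size=(n_timepoints, n_distal))

# Rank ROI voxels by their mean absolute correlation with distal voxels.
corr = np.corrcoef(roi_ts.T, distal_ts.T)[:n_roi_vox, n_roi_vox:]
connectivity = np.abs(corr).mean(axis=1)
order = np.argsort(connectivity)
voxel_sets = {"weakly connected": order[:50], "strongly connected": order[-50:]}

# Compare cross-validated decoding accuracy between the two voxel sets.
for name, vox in voxel_sets.items():
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          patterns[:, vox], labels, cv=5).mean()
    print(f"{name} voxels: decoding accuracy = {acc:.2f}")
```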
Affiliation(s)
- Jon Walbrin
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Paul E Downing
  - School of Human and Behavioural Sciences, Bangor University, Bangor, Wales
- Filipa Dourado Sotero
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Jorge Almeida
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
10
Abassi E, Papeo L. Category-Selective Representation of Relationships in the Visual Cortex. J Neurosci 2024; 44:e0250232023. PMID: 38124013; PMCID: PMC10860595; DOI: 10.1523/jneurosci.0250-23.2023.
Abstract
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas, to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42), in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
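The cross-category generalization analysis (train a facing-versus-nonfacing classifier on one stimulus category and test it on another) can be illustrated with a small scikit-learn sketch; the patterns and the shared "relational" axis below are simulated assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_trials, n_vox = 80, 300

# A shared "relational" axis assumed to distinguish facing from non-facing dyads
# across stimulus categories; purely illustrative.
axis = rng.normal(0, 0.3, size=n_vox)

def make_dyad_patterns():
    """Simulated multivoxel patterns: label 1 = facing dyads, 0 = non-facing."""
    y = np.repeat([0, 1], n_trials // 2)
    X = rng.normal(0, 1, size=(n_trials, n_vox))
    X[y == 1] += axis
    return X, y

X_body, y_body = make_dyad_patterns()
X_face, y_face = make_dyad_patterns()

# Train on body dyads, test on face dyads (and vice versa): above-chance transfer
# indicates facing/non-facing information that generalizes across category.
clf = LogisticRegression(max_iter=1000).fit(X_body, y_body)
print("train on bodies, test on faces:", clf.score(X_face, y_face))
clf = LogisticRegression(max_iter=1000).fit(X_face, y_face)
print("train on faces, test on bodies:", clf.score(X_body, y_body))
```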
Affiliation(s)
- Etienne Abassi
  - Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
- Liuba Papeo
  - Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron 69675, France
11
Pitcher D, Ianni GR, Holiday K, Ungerleider LG. Identifying the cortical face network with dynamic face stimuli: A large group fMRI study. bioRxiv [Preprint] 2023:2023.09.26.559583. PMID: 37886588; PMCID: PMC10602036; DOI: 10.1101/2023.09.26.559583.
Abstract
Functional magnetic resonance imaging (fMRI) studies have identified a network of face-selective regions distributed across the human brain. In the present study, we analyzed data from a large group of gender-balanced participants to investigate how reliably these face-selective regions could be identified across both cerebral hemispheres. Participants (N = 52) were scanned with fMRI while viewing short videos of faces, bodies, and objects. Results revealed that five face-selective regions (the fusiform face area (FFA), posterior superior temporal sulcus (pSTS), anterior superior temporal sulcus (aSTS), inferior frontal gyrus (IFG), and the amygdala) were all larger in the right than in the left hemisphere. The occipital face area (OFA) was larger in the right hemisphere as well, but the difference between the hemispheres was not significant. The neural response to moving faces was also greater in face-selective regions in the right than in the left hemisphere. An additional analysis revealed that the pSTS and IFG were significantly larger in the right hemisphere compared to other face-selective regions. This pattern of results demonstrates that moving faces are preferentially processed in the right hemisphere and that the pSTS and IFG appear to be the strongest drivers of this laterality. An analysis of gender revealed that face-selective regions were typically larger in females (N = 26) than males (N = 26), but this gender difference was not statistically significant.
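A hemispheric size comparison of this kind can be summarized with a simple laterality index computed from voxel counts; the counts below are invented for illustration and are not values from the study.

```python
# Illustrative only: voxel counts are hypothetical, not values from the study.
roi_voxels = {
    "FFA":  {"left": 210, "right": 290},
    "pSTS": {"left": 150, "right": 330},
    "OFA":  {"left": 180, "right": 205},
    "IFG":  {"left": 60,  "right": 140},
}

for roi, size in roi_voxels.items():
    left, right = size["left"], size["right"]
    li = (right - left) / (right + left)   # > 0 means larger in the right hemisphere
    print(f"{roi}: laterality index = {li:+.2f}")
```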
12
Rolls ET. Emotion, motivation, decision-making, the orbitofrontal cortex, anterior cingulate cortex, and the amygdala. Brain Struct Funct 2023; 228:1201-1257. PMID: 37178232; PMCID: PMC10250292; DOI: 10.1007/s00429-023-02644-9.
Abstract
The orbitofrontal cortex and amygdala are involved in emotion and in motivation, but the relationship between these functions performed by these brain structures is not clear. To address this, a unified theory of emotion and motivation is described in which motivational states are states in which instrumental goal-directed actions are performed to obtain rewards or avoid punishers, and emotional states are states that are elicited when the reward or punisher is or is not received. This greatly simplifies our understanding of emotion and motivation, for the same set of genes and associated brain systems can define the primary or unlearned rewards and punishers such as sweet taste or pain. Recent evidence on the connectivity of human brain systems involved in emotion and motivation indicates that the orbitofrontal cortex is involved in reward value and experienced emotion with outputs to cortical regions including those involved in language, and is a key brain region involved in depression and the associated changes in motivation. The amygdala has weak effective connectivity back to the cortex in humans, and is implicated in brainstem-mediated responses to stimuli such as freezing and autonomic activity, rather than in declarative emotion. The anterior cingulate cortex is involved in learning actions to obtain rewards, and with the orbitofrontal cortex and ventromedial prefrontal cortex in providing the goals for navigation and in reward-related effects on memory consolidation mediated partly via the cholinergic system.
Affiliation(s)
- Edmund T Rolls
  - Oxford Centre for Computational Neuroscience, Oxford, UK
  - Department of Computer Science, University of Warwick, Coventry, UK
13
Nentwich M, Leszczynski M, Russ BE, Hirsch L, Markowitz N, Sapru K, Schroeder CE, Mehta AD, Bickel S, Parra LC. Semantic novelty modulates neural responses to visual change across the human brain. Nat Commun 2023; 14:2910. PMID: 37217478; PMCID: PMC10203305; DOI: 10.1038/s41467-023-38576-5.
Abstract
Our continuous visual experience in daily life is dominated by change. Previous research has focused on visual change due to stimulus motion, eye movements or unfolding events, but not their combined impact across the brain, or their interactions with semantic novelty. We investigate the neural responses to these sources of novelty during film viewing. We analyzed intracranial recordings in humans across 6328 electrodes from 23 individuals. Responses associated with saccades and film cuts were dominant across the entire brain. Film cuts at semantic event boundaries were particularly effective in the temporal and medial temporal lobe. Saccades to visual targets with high visual novelty were also associated with strong neural responses. Specific locations in higher-order association areas showed selectivity to either high or low-novelty saccades. We conclude that neural activity associated with film cuts and eye movements is widespread across the brain and is modulated by semantic novelty.
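An intracranial analysis of this kind ultimately rests on event-locked averaging of each electrode's activity around saccade onsets and film cuts. A minimal sketch with synthetic data follows; the sampling rate, window, and event lists are assumptions, not the study's recording parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
sfreq = 512                                       # Hz (assumed sampling rate)
signal = rng.normal(0, 1, size=10 * 60 * sfreq)   # e.g., broadband power, one electrode

# Assumed event times (sample indices) for saccade onsets and film cuts.
events = {"saccades": rng.integers(sfreq, len(signal) - sfreq, 800),
          "film_cuts": rng.integers(sfreq, len(signal) - sfreq, 120)}

def event_locked_average(sig, onsets, pre=0.2, post=0.6):
    """Average the signal in a window around each event onset."""
    a, b = int(pre * sfreq), int(post * sfreq)
    return np.stack([sig[t - a:t + b] for t in onsets]).mean(axis=0)

for name, onsets in events.items():
    erp = event_locked_average(signal, onsets)
    print(f"{name}: peak event-locked response = {erp.max():.3f}")
```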
Affiliation(s)
- Maximilian Nentwich
  - Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Marcin Leszczynski
  - Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY, USA
  - Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
  - Cognitive Science Department, Institute of Philosophy, Jagiellonian University, Kraków, Poland
- Brian E Russ
  - Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
  - Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine, New York, NY, USA
  - Department of Psychiatry, New York University at Langone, New York, NY, USA
- Lukas Hirsch
  - Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Noah Markowitz
  - The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Kaustubh Sapru
  - Department of Biomedical Engineering, The City College of New York, New York, NY, USA
- Charles E Schroeder
  - Departments of Psychiatry and Neurology, Columbia University College of Physicians and Surgeons, New York, NY, USA
  - Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Ashesh D Mehta
  - The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
  - Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Stephan Bickel
  - Translational Neuroscience Lab Division, Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
  - The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
  - Departments of Neurosurgery and Neurology, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Lucas C Parra
  - Department of Biomedical Engineering, The City College of New York, New York, NY, USA
14
Rolls ET, Rauschecker JP, Deco G, Huang CC, Feng J. Auditory cortical connectivity in humans. Cereb Cortex 2023; 33:6207-6227. PMID: 36573464; PMCID: PMC10422925; DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
Affiliation(s)
- Edmund T Rolls
  - Oxford Centre for Computational Neuroscience, Oxford, UK
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Josef P Rauschecker
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA
  - Institute for Advanced Study, Technical University, Munich, Germany
- Gustavo Deco
  - Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
  - Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
  - Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng
  - Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
15
Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023; 33:533-572. PMID: 36070199; PMCID: PMC10946493; DOI: 10.1002/hipo.23467.
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus in what is the parahippocampal scene area. A second "where" driver of spatial view cells are parietal inputs, which it is proposed provide the idiothetic update for spatial view cells, used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented by hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and the highly developed temporal lobe for object and scene processing in primates including humans provide a basis for hippocampal spatial view cells to be key to understanding episodic memory in the primate and human hippocampus, and the roles of this system in primate including human navigation.
Affiliation(s)
- Edmund T. Rolls
  - Oxford Centre for Computational Neuroscience, Oxford, UK
  - Department of Computer Science, University of Warwick, Coventry, UK
16
Breu MS, Ramezanpour H, Dicke PW, Thier P. A frontoparietal network for volitional control of gaze following. Eur J Neurosci 2023; 57:1723-1735. PMID: 36967647; DOI: 10.1111/ejn.15975.
Abstract
Gaze following is a major element of non-verbal communication and important for successful social interactions. Human gaze following is a fast and almost reflex-like behaviour, yet it can be volitionally controlled and suppressed to some extent if inappropriate or unnecessary, given the social context. In order to identify the neural basis of the cognitive control of gaze following, we carried out an event-related fMRI experiment, in which human subjects' eye movements were tracked while they were exposed to gaze cues in two distinct contexts: a baseline gaze following condition in which subjects were instructed to use gaze cues to shift their attention to a gazed-at spatial target, and a control condition in which the subjects were required to ignore the gaze cue and instead to shift their attention to a distinct spatial target to be selected based on a colour mapping rule, requiring the suppression of gaze following. We could identify a suppression-related blood-oxygen-level-dependent (BOLD) response in a frontoparietal network comprising dorsolateral prefrontal cortex (dlPFC), orbitofrontal cortex (OFC), the anterior insula, precuneus, and posterior parietal cortex (PPC). These findings suggest that overexcitation of these frontoparietal circuits, which in turn suppresses the gaze following patch, might be a potential cause of gaze following deficits in clinical populations.
Affiliation(s)
- M S Breu
  - Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- H Ramezanpour
  - Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- P W Dicke
  - Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- P Thier
  - Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
  - Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
17
Aviv V. Moving through silence in dance: A neural perspective. Prog Brain Res 2023; 280:89-101. PMID: 37714574; DOI: 10.1016/bs.pbr.2022.12.009.
Abstract
The word "silence" typically refers to the auditory modality, signifying an absence of sound or noise, being quiet. One may then ask: could we attribute the notion of silence to the domain of dance, e.g., when a movement is absent and the dancer stops moving? Is it at all useful to think in terms of silence when referring to dance? In this chapter, my exploration of these questions is based on recent studies in brain research, which demonstrate the remarkable facility of specific regions in the human brain to perceive visually referred biological and, in particular, human motion, leading to prediction of future movements of the human body. I will argue that merely ceasing motion is an insufficient condition for creating a perception of silence in the mind of a spectator of dance. Rather, the experience of silence in dance is a special situation where the static position of the dancer does not imply motion, and is unlikely to evoke interpretation of the intentions or the emotional expression of the dancer. For this to happen, the position of the dancer, while being still, should be held effortlessly, aimlessly, and with a minimal expression of emotion and intention. Furthermore, I suggest that dynamics, repetitive movement (such as that of Sufi whirling dervishes), can also be perceived as silence in dance because of the high level of predictability and evenness of the movement. These moments of silence in dance, which are so rare in our daily lives, invite us to experience the human body from a new, "out of the box" perspective that is the essence of all the arts.
Affiliation(s)
- Vered Aviv
  - The Jerusalem Academy of Music and Dance, Jerusalem, Israel
18
Bognár A, Raman R, Taubert N, Zafirova Y, Li B, Giese M, De Gelder B, Vogels R. The contribution of dynamics to macaque body and face patch responses. Neuroimage 2023; 269:119907. PMID: 36717042; PMCID: PMC9986793; DOI: 10.1016/j.neuroimage.2023.119907.
Abstract
Previous functional imaging studies demonstrated body-selective patches in the primate visual temporal cortex, comparing activations to static bodies and static images of other categories. However, the use of static instead of dynamic displays of moving bodies may have underestimated the extent of the body patch network. Indeed, body dynamics provide information about action and emotion and may be processed in patches not activated by static images. Thus, to map with fMRI the full extent of the macaque body patch system in the visual temporal cortex, we employed dynamic displays of natural-acting monkey bodies, dynamic monkey faces, objects, and scrambled versions of these videos, all presented during fixation. We found nine body patches in the visual temporal cortex, starting posteriorly in the superior temporal sulcus (STS) and ending anteriorly in the temporal pole. Unlike for static images, body patches were present consistently in both the lower and upper banks of the STS. Overall, body patches showed a higher activation by dynamic displays than by matched static images, which, for identical stimulus displays, was less the case for the neighboring face patches. These data provide the groundwork for future single-unit recording studies to reveal the spatiotemporal features the neurons of these body patches encode. These fMRI findings suggest that dynamics have a stronger contribution to population responses in body than face patches.
Affiliation(s)
- A Bognár
  - Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Leuven Brain Institute, KU Leuven, Leuven, Belgium
- R Raman
  - Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Leuven Brain Institute, KU Leuven, Leuven, Belgium
- N Taubert
  - Department of Cognitive Neurology, University of Tuebingen, Tuebingen, Germany
- Y Zafirova
  - Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Leuven Brain Institute, KU Leuven, Leuven, Belgium
- B Li
  - Department of Cognitive Neuroscience, Maastricht University, Maastricht, the Netherlands
- M Giese
  - Department of Cognitive Neurology, University of Tuebingen, Tuebingen, Germany
- B De Gelder
  - Department of Cognitive Neuroscience, Maastricht University, Maastricht, the Netherlands
  - Department of Computer Science, University College London, London, UK
- R Vogels
  - Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Leuven Brain Institute, KU Leuven, Leuven, Belgium
19
Lima B, Florentino MM, Fiorani M, Soares JGM, Schmidt KE, Neuenschwander S, Baron J, Gattass R. Cortical maps as a fundamental neural substrate for visual representation. Prog Neurobiol 2023; 224:102424. PMID: 36828036; DOI: 10.1016/j.pneurobio.2023.102424.
Abstract
Visual perception is the product of serial hierarchical processing, parallel processing, and remapping on a dynamic network involving several topographically organized cortical visual areas. Here, we will focus on the topographical organization of cortical areas and the different kinds of visual maps found in the primate brain. We will interpret our findings in light of a broader representational framework for perception. Based on neurophysiological data, our results do not support the notion that vision can be explained by a strict representational model, where the objective visual world is faithfully represented in our brain. On the contrary, we find strong evidence that vision is an active and constructive process from the very initial stages taking place in the eye and from the very initial stages of our development. A constructive interplay between perceptual and motor systems (e.g., during saccadic eye movements) is actively learnt from early infancy and ultimately provides our fluid stable visual perception of the world.
Affiliation(s)
- Bruss Lima
  - Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Maria M Florentino
  - Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Mario Fiorani
  - Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Juliana G M Soares
  - Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
- Kerstin E Schmidt
  - Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN 59056-450, Brazil
- Sergio Neuenschwander
  - Instituto do Cérebro, Universidade Federal do Rio Grande do Norte, Natal, RN 59056-450, Brazil
- Jerome Baron
  - Departamento de Fisiologia e Biofísica, Instituto de Ciências Biológicas, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, Brazil
- Ricardo Gattass
  - Programa de Neurobiologia, Instituto de Biofísica Carlos Chagas Filho, Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil
20
Landsiedel J, Daughters K, Downing PE, Koldewyn K. The role of motion in the neural representation of social interactions in the posterior temporal cortex. Neuroimage 2022; 262:119533. PMID: 35931309; PMCID: PMC9485464; DOI: 10.1016/j.neuroimage.2022.119533.
Abstract
Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), amongst others, as an important region for processing social interaction. This research, however, has presented images or videos, and thus the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely with static cues. Regions in the social brain, including SI-pSTS and extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than is EBA, both bilateral SI-pSTS and EBA showed a greater response to social interactions compared to non-interactions and both regions responded more strongly to videos than static images. Indeed, both regions showed higher responses to interactions than independent actions in videos and intact sequences, but not in other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations, and selectivity for static images of bodies, make positive and independent contributions to this effect across the LOTC region. Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
Affiliation(s)
- Paul E Downing
  - School of Human and Behavioural Sciences, Bangor University
- Kami Koldewyn
  - School of Human and Behavioural Sciences, Bangor University
21
Investigation of Brain Activation Patterns Related to the Feminization or Masculinization of Body and Face Images across Genders. Tomography 2022; 8:2093-2106. PMCID: PMC9416062; DOI: 10.3390/tomography8040176.
Abstract
Previous studies demonstrated sex-related differences in several areas of the human brain, including patterns of brain activation in males and females when observing their own bodies and faces (versus other bodies/faces or morphed versions of themselves), but a complex paradigm touching multiple aspects of embodied self-identity is still lacking. We enrolled 24 healthy individuals (12 M, 12 F) in three fMRI experiments: viewing prototypical body silhouettes; viewing static images of the participants' faces morphed with prototypical male and female faces; and viewing short videos showing the dynamic transformation of the morphing. We found sex-related differences in activation in areas linked to self-identity and to the ability to attribute mental states: In Experiment 1, the male group activated more the bilateral thalamus when looking at sex congruent body images, while the female group activated more the middle and inferior temporal gyrus. In Experiment 2, the male group activated more the supplementary motor area when looking at their faces; the female group activated more the dorsomedial prefrontal cortex (dmPFC). In Experiment 3, the female group activated more the dmPFC when observing either the feminization or the masculinization of their face. The defeminization produced more activations in females in the left superior parietal lobule and middle occipital gyrus. The performance of all classifiers built using single ROIs exceeded chance level, reaching an area under the ROC curve > 0.85 in some cases (notably, for Experiment 2 using the V1 ROI). The results of the fMRI tasks showed good agreement with previously published studies, even though our sample size was small. Therefore, our functional MRI protocol showed significantly different patterns of activation in males and females, but further research is needed both to investigate the gender-related differences in activation when observing a morphing of their face/body, and to validate our paradigm using a larger sample.
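A single-ROI classifier evaluated with a cross-validated area under the ROC curve can be sketched as below; the feature matrix, group labels, and classifier are illustrative assumptions, not the study's features or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_subjects, n_features = 24, 40

# Assumed inputs: one feature vector per participant from a single ROI
# (e.g., activation estimates), with labels 0 = female, 1 = male.
X = rng.normal(0, 1, size=(n_subjects, n_features))
y = np.repeat([0, 1], n_subjects // 2)
X[y == 1, :10] += 0.8                      # inject a separable group difference

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=4, scoring="roc_auc").mean()
print(f"single-ROI classifier, cross-validated ROC AUC = {auc:.2f}")
```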
22
Rolls ET, Deco G, Huang CC, Feng J. Multiple cortical visual streams in humans. Cereb Cortex 2022; 33:3319-3349. PMID: 35834308; DOI: 10.1093/cercor/bhac276.
Abstract
The effective connectivity between 55 visual cortical regions and 360 cortical regions was measured in 171 HCP participants using the HCP-MMP atlas, and complemented with functional connectivity and diffusion tractography. A Ventrolateral Visual "What" Stream for object and face recognition projects hierarchically to the inferior temporal visual cortex, which projects to the orbitofrontal cortex for reward value and emotion, and to the hippocampal memory system. A Ventromedial Visual "Where" Stream for scene representations connects to the parahippocampal gyrus and hippocampus. An Inferior STS (superior temporal sulcus) cortex Semantic Stream receives from the Ventrolateral Visual Stream, from visual inferior parietal PGi, and from the ventromedial-prefrontal reward system and connects to language systems. A Dorsal Visual Stream connects via V2 and V3A to MT+ Complex regions (including MT and MST), which connect to intraparietal regions (including LIP, VIP and MIP) involved in visual motion and actions in space. It performs coordinate transforms for idiothetic update of Ventromedial Stream scene representations. A Superior STS cortex Semantic Stream receives visual inputs from the Inferior STS Visual Stream, PGi, and STV, and auditory inputs from A5, is activated by face expression, motion and vocalization, and is important in social behaviour, and connects to language systems.
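The paper measures effective (directed) connectivity between HCP-MMP parcels; as a simpler, undirected stand-in, the sketch below computes a plain functional connectivity matrix by correlating parcel-averaged time series. The data are synthetic and the seed-parcel example is an assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
n_parcels, n_timepoints = 360, 1200        # the HCP-MMP atlas has 360 cortical parcels

# Assumed input: parcel-averaged BOLD time series, shape (timepoints, parcels).
ts = rng.normal(0, 1, size=(n_timepoints, n_parcels))
ts[:, 1] += 0.5 * ts[:, 0]                 # give two parcels correlated activity

# Functional connectivity: Pearson correlation between every pair of parcels.
fc = np.corrcoef(ts.T)                     # shape (360, 360)

seed = 0                                   # index of a seed parcel (e.g., V1)
strongest = np.argsort(fc[seed])[::-1][1:6]   # five most correlated parcels, excluding the seed
print("seed parcel 0, strongest functional connectivity with parcels:", strongest)
print("correlation values:", np.round(fc[seed, strongest], 2))
```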
Collapse
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom; Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
| | - Gustavo Deco
- Computational Neuroscience Group, Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Brain and Cognition, Pompeu Fabra University, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
| | - Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 200602, China
| | - Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, United Kingdom; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
| |
Collapse
|
23
|
Nikel L, Sliwinska MW, Kucuk E, Ungerleider LG, Pitcher D. Measuring the response to visually presented faces in the human lateral prefrontal cortex. Cereb Cortex Commun 2022; 3:tgac036. [PMID: 36159205 PMCID: PMC9491845 DOI: 10.1093/texcom/tgac036] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 08/12/2022] [Accepted: 08/14/2022] [Indexed: 12/04/2022] Open
Abstract
Neuroimaging studies identify multiple face-selective areas in the human brain. In the current study, we compared the functional response of the face area in the lateral prefrontal cortex to that of other face-selective areas. In Experiment 1, participants (n = 32) were scanned viewing videos containing faces, bodies, scenes, objects, and scrambled objects. We identified a face-selective area in the right inferior frontal gyrus (rIFG). In Experiment 2, participants (n = 24) viewed the same videos or static images. Results showed that the rIFG, right posterior superior temporal sulcus (rpSTS), and right occipital face area (rOFA) exhibited a greater response to moving than to static faces. In Experiment 3, participants (n = 18) viewed face videos in the contralateral and ipsilateral visual fields. Results showed that the rIFG and rpSTS exhibited no visual field bias, while the rOFA and right fusiform face area (rFFA) showed a contralateral bias. These experiments suggest two conclusions: first, in all three experiments, the face area in the IFG was not as reliably identified as face areas in the occipitotemporal cortex; second, the similarity of the response profiles in the IFG and pSTS suggests the areas may perform similar cognitive functions, a conclusion consistent with prior neuroanatomical and functional connectivity evidence.
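The moving-versus-static ROI comparison reported here amounts to a paired test on per-participant ROI responses. A minimal sketch, with synthetic beta estimates standing in for real data and a hypothetical ROI:

# Sketch of the ROI analysis: does a face-selective ROI respond more to moving
# than to static faces? Per-participant mean betas are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 24
beta_moving = rng.normal(loc=1.2, scale=0.5, size=n_subjects)  # e.g., rpSTS response to face videos
beta_static = rng.normal(loc=0.8, scale=0.5, size=n_subjects)  # response to static face images

t, p = stats.ttest_rel(beta_moving, beta_static)
print(f"moving vs static: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")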
Collapse
Affiliation(s)
- Lara Nikel
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
| | | | - Emel Kucuk
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
| | - Leslie G Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - David Pitcher
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
| |
Collapse
|
24
|
Duarte JV, Abreu R, Castelo-Branco M. A two-stage framework for neural processing of biological motion. Neuroimage 2022; 259:119403. [PMID: 35738331 DOI: 10.1016/j.neuroimage.2022.119403] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 05/18/2022] [Accepted: 06/19/2022] [Indexed: 11/26/2022] Open
Abstract
It remains to be understood how biological motion is hierarchically computed, from the discrimination of local biological-motion animacy to global dynamic body perception. Here, we addressed the functional separation between the correlates of perceiving local biological motion and those of perceiving the global motion of a body. We hypothesized that local biological-motion processing can be isolated by using a single-dot motion perceptual decision paradigm featuring the biomechanical details of realistic local motion of a single joint. To ensure that we were indeed tapping the processing of biological-motion properties, we used a discrimination rather than a detection task. Using representational similarity analysis, we found that two key early dorsal-stream regions and two ventral-stream regions (motion-selective hMT+ and V3A; the extrastriate body area, EBA, and a region within the fusiform gyrus, FFG) showed robust and separable signals related to the encoding of local biological motion and global motion-mediated shape. These signals reflected two independent processing stages, as revealed by representational similarity analysis and deconvolution of fMRI responses to each motion pattern. The study also showed that the higher-level pSTS encodes both classes of biological motion in a similar way, revealing a higher-level integrative stage that reflects scale-independent biological motion perception. Our results reveal a two-stage framework for the neural computation of biological motion, with an independent contribution of dorsal and ventral regions at the initial stage.
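Representational similarity analysis of the kind used here compares condition-by-condition dissimilarity matrices (RDMs) across regions. A minimal sketch with synthetic activity patterns; the condition count, voxel count, and region labels are assumptions for illustration only:

# Minimal RSA sketch: build condition-by-condition RDMs for two ROIs and compare
# them via a Spearman correlation of their condensed (upper-triangle) entries.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_conditions, n_voxels = 8, 150                     # e.g., local vs global motion conditions
patterns_hmt = rng.normal(size=(n_conditions, n_voxels))
patterns_eba = rng.normal(size=(n_conditions, n_voxels))

rdm_hmt = pdist(patterns_hmt, metric="correlation") # condensed RDM (1 - Pearson r)
rdm_eba = pdist(patterns_eba, metric="correlation")
rho, p = spearmanr(rdm_hmt, rdm_eba)
print(f"RDM similarity (hMT+ vs EBA): rho = {rho:.2f}, p = {p:.3f}")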
Collapse
Affiliation(s)
- João Valente Duarte
- Centre of Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Portugal
| | - Rodolfo Abreu
- Centre of Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal
| | - Miguel Castelo-Branco
- Centre of Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Portugal.
| |
Collapse
|
25
|
Sliwinska MW, Searle LR, Earl M, O'Gorman D, Pollicina G, Burton AM, Pitcher D. Face learning via brief real-world social interactions includes changes in face-selective brain areas and hippocampus. Perception 2022; 51:521-538. [PMID: 35542977 PMCID: PMC9396469 DOI: 10.1177/03010066221098728] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Making new acquaintances requires learning to recognise previously unfamiliar faces. In the current study, we investigated this process by staging real-world social interactions between actors and the participants. Participants completed a face-matching behavioural task in which they matched photographs of the actors (whom they had yet to meet), or faces similar to the actors (henceforth called foils). Participants were then scanned using functional magnetic resonance imaging (fMRI) while viewing photographs of actors and foils. Immediately after exiting the scanner, participants met the actors for the first time and interacted with them for 10 min. On subsequent days, participants completed a second behavioural experiment and then a second fMRI scan. Prior to each session, actors again interacted with the participants for 10 min. Behavioural results showed that social interactions improved performance accuracy when matching actor photographs, but not foil photographs. The fMRI analysis revealed a difference in the neural response to actor photographs and foil photographs across all regions of interest (ROIs) only after social interactions had occurred. Our results demonstrate that short social interactions were sufficient to learn and discriminate previously unfamiliar individuals. Moreover, these learning effects were present in brain areas involved in face processing and memory.
Collapse
Affiliation(s)
- Magdalena W Sliwinska
- School of Psychology, Liverpool John Moores University, UK; Department of Psychology, University of York, UK
| | | | - Megan Earl
- Department of Psychology, University of York, UK
| | | | | | | | | |
Collapse
|
26
|
Scanlon JEM, Jacobsen NSJ, Maack MC, Debener S. Stepping in time: Alpha-mu and beta oscillations during a walking synchronization task. Neuroimage 2022; 253:119099. [PMID: 35301131 DOI: 10.1016/j.neuroimage.2022.119099] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 02/18/2022] [Accepted: 03/13/2022] [Indexed: 11/25/2022] Open
Abstract
Interpersonal behavioral synchrony refers to the temporal coordination of actions between two or more individuals. Humans tend to synchronize their movements during repetitive movement tasks such as walking. Mobile EEG technology now allows us to examine how this happens during gait. Eighteen participants equipped with foot accelerometers and mobile EEG walked with an experimenter in three conditions: with their view of the experimenter blocked, walking naturally, and trying to synchronize their steps with the experimenter. The experimenter walked in time with a metronome played over headphones to keep their steps consistent across conditions. Step behavior and synchronization between the experimenter and participant were compared between conditions. Additionally, event-related spectral perturbations (ERSPs) were time-warped to the gait cycle in order to analyze alpha-mu (7.5-12.5 Hz) and beta (16-32 Hz) rhythms over the whole gait cycle. Step synchronization was significantly higher in the synchrony condition than in the natural condition. Likewise, in the ERSPs, alpha-mu power over right parietal channels (C4, C6, CP4, CP6) and beta power over central channels (C1, Cz, C2) were suppressed relative to baseline in the walking-synchrony condition compared with the natural-walking condition. The natural and blocked conditions did not differ significantly in behavioral or spectral comparisons. Our results are compatible with the view that intentional synchronization engages systems associated with social interaction as well as the central motor system.
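The gait-cycle analysis rests on two steps: extracting band-limited power and time-warping each stride to a common length before averaging. A minimal sketch with synthetic EEG, a hypothetical 1 Hz cadence, and linear interpolation as the warp; this is an illustration, not the authors' pipeline:

# Sketch of gait-cycle time-warping of band power: filter one EEG channel into the
# alpha-mu band, take the Hilbert envelope, then resample each stride
# (heel-strike to heel-strike) to a fixed number of samples so cycles can be averaged.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
rng = np.random.default_rng(4)
eeg = rng.normal(size=int(fs * 60))                       # 60 s of synthetic EEG
heel_strikes = np.arange(0, len(eeg) - int(fs), int(fs))  # hypothetical 1 Hz cadence

b, a = butter(4, [7.5 / (fs / 2), 12.5 / (fs / 2)], btype="band")
power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2          # alpha-mu power envelope

n_bins = 200                                               # common gait-cycle length
cycles = []
for start, stop in zip(heel_strikes[:-1], heel_strikes[1:]):
    seg = power[start:stop]
    x_old = np.linspace(0, 1, len(seg))
    x_new = np.linspace(0, 1, n_bins)
    cycles.append(np.interp(x_new, x_old, seg))            # linear time-warp to n_bins
mean_cycle = np.mean(cycles, axis=0)
print(mean_cycle.shape)                                     # (200,) average power over the gait cycle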
Collapse
Affiliation(s)
- J E M Scanlon
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany.
| | - N S J Jacobsen
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
| | - M C Maack
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
| | - S Debener
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany; Center for Neurosensory Science and Systems, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
27
|
Three cortical scene systems and their development. Trends Cogn Sci 2022; 26:117-127. [PMID: 34857468 PMCID: PMC8770598 DOI: 10.1016/j.tics.2021.11.002] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 10/14/2021] [Accepted: 11/06/2021] [Indexed: 02/03/2023]
Abstract
Since the discovery of three scene-selective regions in the human brain, a central assumption has been that all three regions directly support navigation. We propose instead that cortical scene-processing regions support three distinct computational goals (one of which is not for navigation at all): (i) the parahippocampal place area supports scene categorization, which involves recognizing the kind of place we are in; (ii) the occipital place area supports visually guided navigation, which involves finding our way through the immediately visible environment, avoiding boundaries and obstacles; and (iii) the retrosplenial complex supports map-based navigation, which involves finding our way from a specific place to some distant, out-of-sight place. We further hypothesize that these systems develop along different timelines, with both navigation systems developing more slowly than the scene categorization system.
Collapse
|
28
|
Kosakowski HL, Cohen MA, Takahashi A, Keil B, Kanwisher N, Saxe R. Selective responses to faces, scenes, and bodies in the ventral visual pathway of infants. Curr Biol 2022; 32:265-274.e5. [PMID: 34784506 PMCID: PMC8792213 DOI: 10.1016/j.cub.2021.10.064] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/27/2021] [Accepted: 10/28/2021] [Indexed: 01/26/2023]
Abstract
Three of the most robust functional landmarks in the human brain are the selective responses to faces in the fusiform face area (FFA), scenes in the parahippocampal place area (PPA), and bodies in the extrastriate body area (EBA). Are the selective responses of these regions present early in development or do they require many years to develop? Prior evidence leaves this question unresolved. We designed a new 32-channel infant magnetic resonance imaging (MRI) coil and collected high-quality functional MRI (fMRI) data from infants (2-9 months of age) while they viewed stimuli from four conditions-faces, bodies, objects, and scenes. We find that infants have face-, scene-, and body-selective responses in the location of the adult FFA, PPA, and EBA, respectively, powerfully constraining accounts of cortical development.
Collapse
Affiliation(s)
- Heather L Kosakowski
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA.
| | - Michael A Cohen
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA; Department of Psychology and Program in Neuroscience, Amherst College, 220 South Pleasant Street, Amherst, MA, USA
| | - Atsushi Takahashi
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA
| | - Boris Keil
- Institute of Medical Physics and Radiation Protection, Department of Life Science Engineering, Mittelhessen University of Applied Science, Giessen, Germany
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA
| | - Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA
| |
Collapse
|
29
|
Gillette KD, Phillips EM, Dilks DD, Berns GS. Using Live and Video Stimuli to Localize Face and Object Processing Regions of the Canine Brain. Animals (Basel) 2022; 12:ani12010108. [PMID: 35011214 PMCID: PMC8749767 DOI: 10.3390/ani12010108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 08/04/2021] [Accepted: 12/28/2021] [Indexed: 11/16/2022] Open
Simple Summary
We showed dogs and humans live-action stimuli (actors and objects) and videos of the same stimuli during fMRI to measure the equivalency of live and two-dimensional stimuli in the dog’s brain. We found that video stimuli were effective in defining face and object regions. However, the human fusiform face area and posterior superior temporal sulcus, and the analogous area in the dog brain, appeared to respond preferentially to live stimuli. In object regions, there was not a significantly different response between live and video stimuli.
Abstract
Previous research to localize face areas in dogs’ brains has generally relied on static images or videos. However, most dogs do not naturally engage with two-dimensional images, raising the question of whether dogs perceive such images as representations of real faces and objects. To measure the equivalency of live and two-dimensional stimuli in the dog’s brain, during functional magnetic resonance imaging (fMRI) we presented dogs and humans with live-action stimuli (actors and objects) as well as videos of the same actors and objects. The dogs (n = 7) and humans (n = 5) were presented with 20 s blocks of faces and objects in random order. In dogs, we found significant areas of increased activation in the putative dog face area, and in humans, we found significant areas of increased activation in the fusiform face area to both live and video stimuli. In both dogs and humans, we found areas of significant activation in the posterior superior temporal sulcus (ectosylvian fissure in dogs) and the lateral occipital complex (entolateral gyrus in dogs) to both live and video stimuli. Of these regions of interest, only the area along the ectosylvian fissure in dogs showed significantly more activation to live faces than to video faces, whereas, in humans, both the fusiform face area and posterior superior temporal sulcus responded significantly more to live conditions than video conditions. However, using the video conditions alone, we were able to localize all regions of interest in both dogs and humans. Therefore, videos can be used to localize these regions of interest, though live conditions may be more salient.
Collapse
|
30
|
One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021; 227:1423-1438. [PMID: 34792643 DOI: 10.1007/s00429-021-02420-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/22/2021] [Indexed: 10/19/2022]
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct parts of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object, and their presence inevitably covaries in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
Collapse
|
31
|
Babo-Rebelo M, Puce A, Bullock D, Hugueville L, Pestilli F, Adam C, Lehongre K, Lambrecq V, Dinkelacker V, George N. Visual Information Routes in the Posterior Dorsal and Ventral Face Network Studied with Intracranial Neurophysiology and White Matter Tract Endpoints. Cereb Cortex 2021; 32:342-366. [PMID: 34339495 PMCID: PMC8754371 DOI: 10.1093/cercor/bhab212] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Revised: 05/03/2021] [Accepted: 06/02/2021] [Indexed: 11/13/2022] Open
Abstract
Occipitotemporal regions within the face network process perceptual and socioemotional information, but the dynamics and information flow between different nodes of this network are still debated. Here, we analyzed intracerebral EEG from 11 epileptic patients viewing a stimulus sequence beginning with a neutral face with direct gaze. The gaze could avert or remain direct, while the emotion changed to fearful or happy. N200 field potential peak latencies indicated that face processing begins in inferior occipital cortex and proceeds anteroventrally to fusiform and inferior temporal cortices, in parallel. The superior temporal sulcus responded preferentially to gaze changes with augmented field potential amplitudes for averted versus direct gaze, and large effect sizes relative to other network regions. An overlap analysis of posterior white matter tractography endpoints (from 1066 healthy brains) relative to active intracerebral electrodes in the 11 patients showed likely involvement of both dorsal and ventral posterior white matter pathways. Overall, our data provide new insight into the timing of face and social cue processing in the occipitotemporal brain and anchor the superior temporal cortex in dynamic gaze processing.
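Estimating an N200 peak latency amounts to averaging trials and locating the most negative deflection in a post-stimulus window. A minimal sketch with synthetic epochs standing in for the iEEG recordings; the sampling rate, epoch limits, and search window are assumptions, not the paper's exact parameters:

# Sketch of N200 peak-latency estimation from an evoked response:
# average trials, then find the most negative deflection in a 150-250 ms window.
import numpy as np

fs = 1000.0
times = np.arange(-0.2, 0.6, 1 / fs)               # epoch from -200 to +600 ms
rng = np.random.default_rng(5)
epochs = rng.normal(size=(40, times.size))          # 40 trials x time samples (synthetic)
evoked = epochs.mean(axis=0)

window = (times >= 0.15) & (times <= 0.25)
peak_idx = np.argmin(evoked[window])                # N200 is a negative-going peak
peak_latency_ms = times[window][peak_idx] * 1000
print(f"N200 peak latency: {peak_latency_ms:.0f} ms")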
Collapse
Affiliation(s)
- M Babo-Rebelo
- Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Centre de Neuroimagerie de Recherche, CENIR, Centre MEG-EEG and STIM Platform, Paris F-75013, France; Sorbonne Université, Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Experimental Neurosurgery Team, Paris F-75013, France; Institute of Cognitive Neuroscience, University College London, WC1N 3AZ, London, UK
| | - A Puce
- Department of Psychological and Brain Sciences, Programs in Neuroscience, Cognitive Science, Indiana University, Bloomington, IN 47401, USA
| | - D Bullock
- Department of Psychological and Brain Sciences, Programs in Neuroscience, Cognitive Science, Indiana University, Bloomington, IN 47401, USA
| | - L Hugueville
- Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Centre de Neuroimagerie de Recherche, CENIR, Centre MEG-EEG and STIM Platform, Paris F-75013, France
| | - F Pestilli
- Department of Psychological and Brain Sciences, Programs in Neuroscience, Cognitive Science, Indiana University, Bloomington, IN 47401, USA
| | - C Adam
- Neurophysiology Department, AP-HP, GH Pitié-Salpêtrière-Charles Foix, Paris F-75013, France
| | - K Lehongre
- Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Centre de Neuroimagerie de Recherche, CENIR, Centre MEG-EEG and STIM Platform, Paris F-75013, France
| | - V Lambrecq
- Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Centre de Neuroimagerie de Recherche, CENIR, Centre MEG-EEG and STIM Platform, Paris F-75013, France; Neurophysiology Department, AP-HP, GH Pitié-Salpêtrière-Charles Foix, Paris F-75013, France
| | - V Dinkelacker
- Department of Neurology, Rothschild Foundation, Paris F-75019, France
| | - N George
- Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Centre de Neuroimagerie de Recherche, CENIR, Centre MEG-EEG and STIM Platform, Paris F-75013, France; Sorbonne Université, Institut du Cerveau-Paris Brain Institute, ICM, Inserm U 1127, CNRS UMR 7225, Experimental Neurosurgery Team, Paris F-75013, France
| |
Collapse
|
32
|
Pitcher D, Pilkington A, Rauth L, Baker C, Kravitz DJ, Ungerleider LG. The Human Posterior Superior Temporal Sulcus Samples Visual Space Differently From Other Face-Selective Regions. Cereb Cortex 2021; 30:778-785. [PMID: 31264693 DOI: 10.1093/cercor/bhz125] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2019] [Revised: 05/17/2019] [Accepted: 05/20/2019] [Indexed: 01/22/2023] Open
Abstract
Neuroimaging studies show that ventral face-selective regions, including the fusiform face area (FFA) and occipital face area (OFA), preferentially respond to faces presented in the contralateral visual field (VF). In the current study we measured the VF response of the face-selective posterior superior temporal sulcus (pSTS). Across 3 functional magnetic resonance imaging experiments, participants viewed face videos presented in different parts of the VF. Consistent with prior results, we observed a contralateral VF bias in bilateral FFA, right OFA (rOFA), and bilateral human motion-selective area MT+. Intriguingly, this contralateral VF bias was absent in the bilateral pSTS. We then delivered transcranial magnetic stimulation (TMS) over right pSTS (rpSTS) and rOFA, while participants matched facial expressions in both hemifields. TMS delivered over the rpSTS disrupted performance in both hemifields, but TMS delivered over the rOFA disrupted performance in the contralateral hemifield only. These converging results demonstrate that the contralateral bias for faces observed in ventral face-selective areas is absent in the pSTS. This difference in VF response is consistent with face processing models proposing 2 functionally distinct pathways. It further suggests that these models should account for differences in interhemispheric connections between the face-selective areas across these 2 pathways.
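A contralateral visual-field bias of the kind reported here can be summarized by a normalized contra-minus-ipsi index per ROI. A minimal sketch with synthetic per-participant responses; the ROI, sample size, and effect sizes are illustrative assumptions:

# Sketch of a contralateral visual-field bias index for one ROI: the normalized
# difference between responses to contralateral and ipsilateral face videos.
import numpy as np

rng = np.random.default_rng(6)
contra = rng.normal(loc=1.5, scale=0.4, size=20)   # e.g., right FFA, left-hemifield faces
ipsi = rng.normal(loc=0.9, scale=0.4, size=20)     # right FFA, right-hemifield faces

bias = (contra - ipsi) / (contra + ipsi)           # > 0 indicates a contralateral preference
print(f"mean contralateral bias: {bias.mean():.2f}")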
Collapse
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK; Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Amy Pilkington
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
| | - Lionel Rauth
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Chris Baker
- Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| | - Dwight J Kravitz
- Department of Psychology, George Washington University, 2125 G Street NW, Washington, DC 20052, USA
| | - Leslie G Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| |
Collapse
|
33
|
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583 PMCID: PMC10149139 DOI: 10.1016/j.tics.2021.02.008] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/22/2021] [Indexed: 10/21/2022]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Collapse
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
| | - Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada.
| |
Collapse
|
34
|
Pitcher D. Characterizing the Third Visual Pathway for Social Perception. Trends Cogn Sci 2021; 25:550-551. [PMID: 34024729 DOI: 10.1016/j.tics.2021.04.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 04/21/2021] [Indexed: 11/25/2022]
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK.
| |
Collapse
|
35
|
Suzuki S, Kamps FS, Dilks DD, Treadway MT. Two scene navigation systems dissociated by deliberate versus automatic processing. Cortex 2021; 140:199-209. [PMID: 33992908 DOI: 10.1016/j.cortex.2021.03.027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 11/25/2020] [Accepted: 03/20/2021] [Indexed: 10/21/2022]
Abstract
Successfully navigating the world requires avoiding boundaries and obstacles in one's immediately-visible environment, as well as finding one's way to distant places in the broader environment. Recent neuroimaging studies suggest that these two navigational processes involve distinct cortical scene processing systems, with the occipital place area (OPA) supporting navigation through the local visual environment, and the retrosplenial complex (RSC) supporting navigation through the broader spatial environment. Here we hypothesized that these systems are distinguished not only by the scene information they represent (i.e., the local visual versus broader spatial environment), but also based on the automaticity of the process they involve, with navigation through the broader environment (including RSC) operating deliberately, and navigation through the local visual environment (including OPA) operating automatically. We tested this hypothesis using fMRI and a maze-navigation paradigm, where participants navigated two maze structures (complex or simple, testing representation of the broader spatial environment) under two conditions (active or passive, testing deliberate versus automatic processing). Consistent with the hypothesis that RSC supports deliberate navigation through the broader environment, RSC responded significantly more to complex than simple mazes during active, but not passive navigation. By contrast, consistent with the hypothesis that OPA supports automatic navigation through the local visual environment, OPA responded strongly even during passive navigation, and did not differentiate between active versus passive conditions. Taken together, these findings suggest the novel hypothesis that navigation through the broader spatial environment is deliberate, whereas navigation through the local visual environment is automatic, shedding new light on the dissociable functions of these systems.
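The key ROI result is an interaction: maze complexity modulates the response during active but not passive navigation. A minimal sketch of that interaction test, with synthetic RSC betas and a hypothetical sample size:

# Sketch of the ROI interaction test: per participant, compute the
# (complex - simple) difference under active and passive navigation and compare.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 20
active_complex = rng.normal(1.2, 0.4, n)
active_simple = rng.normal(0.8, 0.4, n)
passive_complex = rng.normal(0.9, 0.4, n)
passive_simple = rng.normal(0.9, 0.4, n)

interaction = (active_complex - active_simple) - (passive_complex - passive_simple)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"complexity x condition interaction: t({n - 1}) = {t:.2f}, p = {p:.3f}")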
Collapse
Affiliation(s)
- Shosuke Suzuki
- Department of Psychology, Emory University, Atlanta, GA, United States
| | - Frederik S Kamps
- Department of Psychology, Emory University, Atlanta, GA, United States; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA, United States
| | - Michael T Treadway
- Department of Psychology, Emory University, Atlanta, GA, United States; Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta, GA, United States.
| |
Collapse
|
36
|
Pitcher D, Ungerleider LG. Evidence for a Third Visual Pathway Specialized for Social Perception. Trends Cogn Sci 2021; 25:100-110. [PMID: 33334693 PMCID: PMC7811363 DOI: 10.1016/j.tics.2020.11.006] [Citation(s) in RCA: 211] [Impact Index Per Article: 52.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 11/18/2020] [Accepted: 11/18/2020] [Indexed: 11/20/2022]
Abstract
Existing models propose that primate visual cortex is divided into two functionally distinct pathways. The ventral pathway computes the identity of an object; the dorsal pathway computes the location of an object, and the actions related to that object. Despite remaining influential, the two visual pathways model requires revision. Both human and non-human primate studies reveal the existence of a third visual pathway on the lateral brain surface. This third pathway projects from early visual cortex, via motion-selective areas, into the superior temporal sulcus (STS). Studies demonstrating that the STS computes the actions of moving faces and bodies (e.g., expressions, eye-gaze, audio-visual integration, intention, and mood) show that the third visual pathway is specialized for the dynamic aspects of social perception.
Collapse
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, York, YO10 5DD, UK.
| | - Leslie G Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
| |
Collapse
|
37
|
Dataset of spiking and LFP activity invasively recorded in the human amygdala during aversive dynamic stimuli. Sci Data 2021; 8:9. [PMID: 33446665 PMCID: PMC7809031 DOI: 10.1038/s41597-020-00790-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 12/01/2020] [Indexed: 11/08/2022] Open
Abstract
We present an electrophysiological dataset collected from the amygdalae of nine participants attending a visual dynamic stimulation of emotional aversive content. The participants were patients affected by epilepsy who underwent preoperative invasive monitoring in the mesial temporal lobe. Participants were presented with dynamic visual sequences of fearful faces (aversive condition), interleaved with sequences of neutral landscapes (neutral condition). The dataset contains the simultaneous recording of intracranial EEG (iEEG) and neuronal spike times and waveforms, and localization information for iEEG electrodes. Participant characteristics and trial information are provided. We technically validated this dataset and provide here the spike sorting quality metrics and the spectra of iEEG signals. This dataset allows the investigation of amygdalar response to dynamic aversive stimuli at multiple spatial scales, from the macroscopic EEG to the neuronal firing in the human brain.
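The spectral part of the technical validation can be reproduced in a few lines with Welch's method; the sketch below uses synthetic data and a hypothetical sampling rate in place of the released recordings:

# Sketch of a spectral check for one iEEG channel using Welch's method.
import numpy as np
from scipy.signal import welch

fs = 2000.0                                        # hypothetical iEEG sampling rate
rng = np.random.default_rng(7)
ieeg = rng.normal(size=int(fs * 120))              # 2 minutes of one amygdala contact (synthetic)

freqs, psd = welch(ieeg, fs=fs, nperseg=int(fs * 2))
band = (freqs >= 1) & (freqs <= 200)
peak_freq = freqs[band][np.argmax(psd[band])]
print(f"spectral peak in 1-200 Hz: {peak_freq:.1f} Hz")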
Collapse
|
38
|
Sliwinska MW, Bearpark C, Corkhill J, McPhillips A, Pitcher D. Dissociable pathways for moving and static face perception begin in early visual cortex: Evidence from an acquired prosopagnosic. Cortex 2020; 130:327-339. [DOI: 10.1016/j.cortex.2020.03.033] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 02/14/2020] [Accepted: 03/13/2020] [Indexed: 11/25/2022]
|
39
|
Rigby SN, Jakobson LS, Pearson PM, Stoesz BM. Alexithymia and the Evaluation of Emotionally Valenced Scenes. Front Psychol 2020; 11:1820. [PMID: 32793083 PMCID: PMC7394003 DOI: 10.3389/fpsyg.2020.01820] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 07/01/2020] [Indexed: 01/15/2023] Open
Abstract
Alexithymia is a personality trait characterized by difficulties identifying and describing feelings (DIF and DDF) and an externally oriented thinking (EOT) style. The primary aim of the present study was to investigate links between alexithymia and the evaluation of emotional scenes. We also investigated whether viewers' evaluations of emotional scenes were better predicted by specific alexithymic traits or by individual differences in sensory processing sensitivity (SPS). Participants (N = 106) completed measures of alexithymia and SPS along with a task requiring speeded judgments of the pleasantness of 120 moderately arousing scenes. We did not replicate laterality effects previously described with the scene perception task. Compared to those with weak alexithymic traits, individuals with moderate-to-strong alexithymic traits were less likely to classify positively valenced scenes as pleasant and were less likely to classify scenes with (vs. without) implied motion (IM) in a way that was consistent with normative scene valence ratings. In addition, regression analyses confirmed that reporting strong EOT and a tendency to be easily overwhelmed by busy sensory environments negatively predicted classification accuracy for positive scenes, and that both DDF and EOT negatively predicted classification accuracy for scenes depicting IM. These findings highlight the importance of accounting for stimulus characteristics and individual differences in specific traits associated with alexithymia and SPS when investigating the processing of emotional stimuli. Learning more about the links between these individual difference variables may have significant clinical implications, given that alexithymia is an important, transdiagnostic risk factor for a wide range of psychopathologies.
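The regression analyses reported here predict classification accuracy from alexithymia facets and sensory processing sensitivity. A minimal ordinary-least-squares sketch with synthetic questionnaire scores; the column order and effect built into the synthetic data are hypothetical:

# Sketch of a multiple regression predicting scene-classification accuracy from
# alexithymia facets (DIF, DDF, EOT) and sensory processing sensitivity (SPS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 106
predictors = rng.normal(size=(n, 4))               # columns: DIF, DDF, EOT, SPS (placeholder scores)
accuracy = 0.8 - 0.05 * predictors[:, 2] + rng.normal(scale=0.05, size=n)

X = sm.add_constant(predictors)
model = sm.OLS(accuracy, X).fit()
print(model.params)                                 # the EOT coefficient should come out negative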
Collapse
Affiliation(s)
- Sarah N Rigby
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
| | - Lorna S Jakobson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
| | - Pauline M Pearson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada; Department of Psychology, University of Winnipeg, Winnipeg, MB, Canada
| | - Brenda M Stoesz
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada; Centre for the Advancement of Teaching and Learning, University of Manitoba, Winnipeg, MB, Canada
| |
Collapse
|
40
|
Sliwinska MW, Elson R, Pitcher D. Dual-site TMS demonstrates causal functional connectivity between the left and right posterior temporal sulci during facial expression recognition. Brain Stimul 2020; 13:1008-1013. [PMID: 32335230 PMCID: PMC7301156 DOI: 10.1016/j.brs.2020.04.011] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 03/24/2020] [Accepted: 04/17/2020] [Indexed: 01/16/2023] Open
Abstract
Background: Neuroimaging studies suggest that facial expression recognition is processed in the bilateral posterior superior temporal sulcus (pSTS). Our recent repetitive transcranial magnetic stimulation (rTMS) study demonstrates that the bilateral pSTS is causally involved in expression recognition, although involvement of the right pSTS is greater than involvement of the left pSTS.
Objective/Hypothesis: In this study, we used dual-site TMS to investigate whether the left pSTS is functionally connected to the right pSTS during expression recognition. We predicted that if this connection exists, simultaneous TMS disruption of the bilateral pSTS would impair expression recognition to a greater extent than unilateral stimulation of the right pSTS alone.
Methods: Participants attended two TMS sessions. In Session 1, participants performed an expression recognition task while rTMS was delivered to the face-sensitive right pSTS (experimental site) or the object-sensitive right lateral occipital complex (control site), or no rTMS was delivered (behavioural control). In Session 2, the same experimental design was used, except that continuous theta-burst stimulation (cTBS) was delivered to the left pSTS immediately before behavioural testing commenced. Session order was counter-balanced across participants.
Results: In Session 1, rTMS to the rpSTS impaired performance accuracy compared to the control conditions. Crucially, in Session 2, the size of this impairment effect doubled after cTBS was delivered to the left pSTS.
Conclusions: Our results provide evidence for a causal functional connection between the left and right pSTS during expression recognition. In addition, this study further demonstrates the utility of dual-site TMS for investigating causal functional links between brain regions.
Highlights: Dual-site TMS was used to test causal functional connectivity between the left and right pSTS during expression recognition. rTMS impaired facial expression recognition when delivered to the right pSTS during a facial expression recognition task. cTBS delivered to the left pSTS prior to the task doubled the impairment effect of rTMS to the right pSTS during the task. The results demonstrate causal functional connectivity between the left and right pSTS during expression recognition, and the utility of dual-site TMS for investigating interregional causal functional connectivity.
Collapse
Affiliation(s)
| | - Ryan Elson
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
| | - David Pitcher
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
| |
Collapse
|
41
|
Fedele T, Tzovara A, Steiger B, Hilfiker P, Grunwald T, Stieglitz L, Jokeit H, Sarnthein J. The relation between neuronal firing, local field potentials and hemodynamic activity in the human amygdala in response to aversive dynamic visual stimuli. Neuroimage 2020; 213:116705. [PMID: 32165266 DOI: 10.1016/j.neuroimage.2020.116705] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 02/11/2020] [Accepted: 03/03/2020] [Indexed: 10/24/2022] Open
Abstract
The amygdala is a central part of networks of brain regions underlying perception and cognition, in particular related to processing of emotionally salient stimuli. Invasive electrophysiological and hemodynamic measurements are commonly used to evaluate functions of the human amygdala, but a comprehensive understanding of their relation is still lacking. Here, we aimed at investigating the link between fast and slow frequency amygdalar oscillations, neuronal firing and hemodynamic responses. To this aim, we recorded intracranial electroencephalography (iEEG), hemodynamic responses and single neuron activity from the amygdala of patients with epilepsy. Patients were presented with dynamic visual sequences of fearful faces (aversive condition), interleaved with sequences of neutral landscapes (neutral condition). Comparing responses to aversive versus neutral stimuli across participants, we observed enhanced high gamma power (HGP, >60 Hz) during the first 2 s of aversive sequence viewing, and reduced delta power (1-4 Hz) lasting up to 18 s. In 5 participants with implanted microwires, neuronal firing rates were enhanced following aversive stimuli, and exhibited positive correlation with HGP and hemodynamic responses. Our results show that high gamma power, neuronal firing and BOLD responses from the human amygdala are co-modulated. Our findings provide, for the first time, a comprehensive investigation of amygdalar responses to aversive stimuli, ranging from single-neuron spikes to local field potentials and hemodynamic responses.
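High gamma power is typically obtained by band-passing the LFP and taking the Hilbert envelope, which can then be correlated with firing rates. A minimal sketch with synthetic trial data; the band edges, trial structure, and firing-rate model are assumptions, not the paper's exact parameters:

# Sketch of the high-gamma analysis: band-pass the LFP above 60 Hz, take the Hilbert
# envelope as high gamma power (HGP), and correlate trial-wise HGP with firing rates.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import spearmanr

fs = 1000.0
rng = np.random.default_rng(8)
n_trials, trial_len = 30, int(fs * 2)               # 2 s of LFP per trial (synthetic)
lfp = rng.normal(size=(n_trials, trial_len))

b, a = butter(4, [60 / (fs / 2), 160 / (fs / 2)], btype="band")
hgp = np.abs(hilbert(filtfilt(b, a, lfp, axis=1), axis=1)).mean(axis=1)  # mean HGP per trial
firing_rate = rng.poisson(lam=5, size=n_trials)      # spikes/s per trial (placeholder)

rho, p = spearmanr(hgp, firing_rate)
print(f"HGP vs firing rate: rho = {rho:.2f}, p = {p:.2f}")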
Collapse
Affiliation(s)
- Tommaso Fedele
- National Research University Higher School of Economics, Moscow, Russian Federation.
| | - Athina Tzovara
- Institute for Computer Science, University of Bern, Switzerland
| | | | | | | | - Lennart Stieglitz
- Klinik für Neurochirurgie, UniversitätsSpital Zürich und Universität Zürich, Zurich, Switzerland
| | - Hennric Jokeit
- Schweizerische Epilepsie-Klinik, Zurich, Switzerland; Zentrum für Neurowissenschaften Zürich, Switzerland
| | - Johannes Sarnthein
- Klinik für Neurochirurgie, UniversitätsSpital Zürich und Universität Zürich, Zurich, Switzerland; Zentrum für Neurowissenschaften Zürich, Switzerland.
| |
Collapse
|
42
|
Johnstone LT, Karlsson EM, Carey DP. The validity and reliability of quantifying hemispheric specialisation using fMRI: Evidence from left and right handers on three different cerebral asymmetries. Neuropsychologia 2020; 138:107331. [DOI: 10.1016/j.neuropsychologia.2020.107331] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 12/16/2019] [Accepted: 01/05/2020] [Indexed: 12/21/2022]
|