1. Lin Y, Hsu YY, Cheng T, Hsiung PC, Wu CW, Hsieh PJ. Neural representations of perspectival shapes and attentional effects: Evidence from fMRI and MEG. Cortex 2024;176:129-143. PMID: 38781910. DOI: 10.1016/j.cortex.2024.04.003
Abstract
Does the human brain represent perspectival shapes, i.e., viewpoint-dependent object shapes, especially in relatively higher-level visual areas such as the lateral occipital cortex? What is the temporal profile of the appearance and disappearance of neural representations of perspectival shapes? And how does attention influence these neural representations? To answer these questions, we employed functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and multivariate decoding techniques to investigate spatiotemporal neural representations of perspectival shapes. Participants viewed rotated objects along with the corresponding objective shapes and perspectival shapes (i.e., rotated round, round, and oval) while we measured their brain activity. Our results revealed that shape classifiers trained on the basic shapes (i.e., round and oval) consistently identified neural representations in the lateral occipital cortex corresponding to the perspectival shapes of the viewed objects, regardless of attentional manipulations. Additionally, this classification tendency toward the perspectival shapes emerged approximately 200 ms after stimulus presentation. Moreover, attention influenced the spatial dimension: the regions showing the perspectival-shape classification tendency propagated from the occipital lobe to the temporal lobe. In the temporal dimension, attention led to a more robust and enduring classification tendency toward perspectival shapes. In summary, our study outlines a spatiotemporal neural profile for perspectival shapes that suggests a greater degree of perspectival representation than is often acknowledged.
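The cross-decoding logic described in this abstract can be illustrated with a short sketch: a classifier is trained to separate the two basic shapes and is then applied, without retraining, to trials showing the rotated object; the proportion of trials labeled with the perspectival shape serves as the classification-tendency measure. This is a hypothetical illustration using simulated voxel patterns and scikit-learn, not the authors' analysis code; all array names, sizes, and effect magnitudes are placeholders.

```python
# Toy sketch of cross-decoding a perspectival-shape tendency (simulated data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Simulated ROI patterns (e.g., lateral occipital cortex) for the training classes.
X_round = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X_oval = rng.normal(0.3, 1.0, (n_trials, n_voxels))
X_train = np.vstack([X_round, X_oval])
y_train = np.array([0] * n_trials + [1] * n_trials)  # 0 = round, 1 = oval

# Simulated patterns for the critical condition: a round object seen in rotation.
X_rotated_round = rng.normal(0.25, 1.0, (n_trials, n_voxels))

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
pred = clf.predict(X_rotated_round)

# Fraction of rotated-round trials classified as "oval" (the perspectival shape).
print("perspectival-shape tendency:", pred.mean())
```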
Affiliation(s)
- Yi Lin: Taiwan International Graduate Program in Interdisciplinary Neuroscience, National Cheng Kung University and Academia Sinica, Nankan, Taipei, Taiwan; Research Unit Brain and Cognition, KU Leuven, Leuven, Belgium
- Yung-Yi Hsu: Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Tony Cheng: Waseda Institute for Advanced Study, Waseda University, Tokyo, Japan
- Pin-Cheng Hsiung: Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
- Chen-Wei Wu: Department of Philosophy, Georgia State University, Atlanta, GA, USA
- Po-Jang Hsieh: Department of Psychology, National Taiwan University, Da'an, Taipei, Taiwan
2. Thompson LW, Kim B, Rokers B, Rosenberg A. Hierarchical computation of 3D motion across macaque areas MT and FST. Cell Rep 2023;42:113524. PMID: 38064337. PMCID: PMC10791528. DOI: 10.1016/j.celrep.2023.113524
Abstract
Computing behaviorally relevant representations of three-dimensional (3D) motion from two-dimensional (2D) retinal signals is critical for survival. To ascertain where and how the primate visual system performs this computation, we recorded from the macaque middle temporal (MT) area and its downstream target, the fundus of the superior temporal sulcus (area FST). Area MT is a key site of 2D motion processing, but its role in 3D motion processing is controversial, and the functions of FST remain largely underexplored. To distinguish representations of 3D motion from those of 2D retinal motion, we contrasted responses to multiple motion cues during a motion discrimination task. The results reveal a hierarchical transformation whereby many FST, but not MT, neurons are selective for 3D motion. Modeling results further show how generalized, cue-invariant representations of 3D motion in FST may be created by selectively integrating the output of 2D motion-selective MT neurons.
Affiliation(s)
- Lowell W Thompson: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Byounghoon Kim: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Bas Rokers: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Ari Rosenberg: Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
3. Lee Y, Seo Y, Lee Y, Lee D. Dimensional emotions are represented by distinct topographical brain networks. Int J Clin Health Psychol 2023;23:100408. PMID: 37663040. PMCID: PMC10472247. DOI: 10.1016/j.ijchp.2023.100408
Abstract
The ability to recognize others' facial emotions has become increasingly important since the COVID-19 pandemic, which created stressful situations for emotion regulation. Given the importance of emotion in maintaining social life, the emotion knowledge needed to perceive and label one's own and others' emotions requires an understanding of affective dimensions such as emotional valence and emotional arousal. However, limited information is available about whether the behavioral representation of affective dimensions is similar to their neural representation. To explore the relationship between brain and behavior in the representational geometries of affective dimensions, we constructed a behavioral paradigm in which emotional faces were categorized into geometric spaces along the valence dimension, the arousal dimension, and the combined valence-arousal dimensions. We then compared these representations to neural representations of the same faces acquired with functional magnetic resonance imaging. We found that affective dimensions were similarly represented in behavior and in the brain. Specifically, behavioral and neural representations of valence were less similar to those of arousal. We also found that valence was represented in the dorsolateral prefrontal cortex, frontal eye fields, precuneus, and early visual cortex, whereas arousal was represented in the cingulate gyrus, middle frontal gyrus, orbitofrontal cortex, fusiform gyrus, and early visual cortex. In conclusion, the current study suggests that dimensional emotions are represented similarly in behavior and in the brain, with distinct topographical organizations across the brain.
Affiliation(s)
- Youngju Lee: Cognitive Science Research Group, Korea Brain Research Institute, 61 Cheomdan-ro, Dong-gu, Daegu 41062, Republic of Korea
- Dongha Lee: Cognitive Science Research Group, Korea Brain Research Institute, 61 Cheomdan-ro, Dong-gu, Daegu 41062, Republic of Korea
4. Ren Z, Li J, Xue X, Li X, Yang F, Jiao Z, Gao X. Reconstructing controllable faces from brain activity with hierarchical multiview representations. Neural Netw 2023;166:487-500. PMID: 37574622. DOI: 10.1016/j.neunet.2023.07.016
Abstract
Reconstructing visual experience from brain responses measured by functional magnetic resonance imaging (fMRI) is a challenging yet important research topic in brain decoding; it has proved especially difficult to decode visually similar stimuli, such as faces. Although face attributes are known to be key to face recognition, most existing methods largely ignore how to decode facial attributes more precisely in perceived-face reconstruction, which often leads to indistinguishable reconstructed faces. To solve this problem, we propose a novel neural decoding framework called VSPnet (voxel2style2pixel) that establishes hierarchical encoding and decoding networks with disentangled latent representations as the medium, so as to recover visual stimuli in finer detail. We also design a hierarchical visual encoder (named HVE) to pre-extract features containing both high-level semantic knowledge and low-level visual details from the stimuli. The proposed VSPnet consists of two networks: a multi-branch cognitive encoder and a style-based image generator. The encoder network is constructed from multiple linear regression branches that map brain signals to the latent space provided by the pre-extracted visual features, yielding representations whose hierarchical information is consistent with the corresponding stimuli. The generator network, inspired by StyleGAN, untangles the complexity of the fMRI representations and generates images. The HVE network is composed of a standard feature pyramid over a ResNet backbone. Extensive experiments on the latest public datasets demonstrate that the reconstruction accuracy of our proposed method outperforms state-of-the-art approaches and that the identifiability of different reconstructed faces is greatly improved. In particular, we achieve feature editing for several facial attributes in the fMRI domain based on the multiview (i.e., visual stimuli and evoked fMRI) latent representations.
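The encoder idea described here, mapping voxel patterns onto a hierarchical latent space via separate linear branches, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration with random placeholder data and arbitrary branch sizes; it is not the authors' VSPnet pipeline, and the predicted codes would in practice be passed to a StyleGAN-like generator to produce images.

```python
# Minimal sketch: per-branch ridge regressions from fMRI voxels to hierarchical latents.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_voxels = 900, 100, 4000
latent_dims = {"coarse": 512, "mid": 512, "fine": 512}  # hypothetical branch sizes

X_train = rng.normal(size=(n_train, n_voxels))  # fMRI patterns, training stimuli
X_test = rng.normal(size=(n_test, n_voxels))    # fMRI patterns, held-out stimuli

branches = {}
for name, dim in latent_dims.items():
    Z_train = rng.normal(size=(n_train, dim))   # latent features from a visual encoder (placeholder)
    branches[name] = Ridge(alpha=100.0).fit(X_train, Z_train)

# Predicted hierarchical codes for held-out scans, one block per branch.
predicted_codes = {name: model.predict(X_test) for name, model in branches.items()}
print({name: z.shape for name, z in predicted_codes.items()})
```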
Affiliation(s)
- Ziqi Ren: School of Electronic Engineering, Xidian University, Xi'an 710071, China
- Jie Li: School of Electronic Engineering, Xidian University, Xi'an 710071, China
- Xuetong Xue: School of Electronic Engineering, Xidian University, Xi'an 710071, China
- Xin Li: Group 42 (G42), Abu Dhabi, United Arab Emirates
- Fan Yang: Group 42 (G42), Abu Dhabi, United Arab Emirates
- Zhicheng Jiao: The Warren Alpert Medical School, Brown University, RI, USA; Department of Diagnostic Imaging, Rhode Island Hospital, RI, USA
- Xinbo Gao: School of Electronic Engineering, Xidian University, Xi'an 710071, China
5. Petro LS, Smith FW, Abbatecola C, Muckli L. The Spatial Precision of Contextual Feedback Signals in Human V1. Biology (Basel) 2023;12:1022. PMID: 37508451. PMCID: PMC10376409. DOI: 10.3390/biology12071022
Abstract
Neurons in the primary visual cortex (V1) receive sensory inputs that describe small, local regions of the visual scene, as well as cortical feedback inputs from higher visual areas processing the global scene context. Investigating the spatial precision of this visual contextual modulation will contribute to our understanding of the functional role of cortical feedback inputs in perceptual computations. We used human functional magnetic resonance imaging (fMRI) to test the spatial precision of contextual feedback inputs to V1 during natural scene processing. We measured brain activity patterns in stimulated regions of V1 and in regions that we blocked from direct feedforward input, which therefore received information only from non-feedforward (i.e., feedback and lateral) inputs. We measured the spatial precision of contextual feedback signals by generalising brain activity patterns across versions of identical images that were spatially displaced in parametric steps, using an MVPA cross-classification approach. We found that fMRI activity patterns carried by cortical feedback predicted scene-specific features in V1 with a precision of approximately 4 degrees. The stimulated regions of V1 carried more precise scene information than non-stimulated regions; however, these regions also contained information patterns that generalised up to 4 degrees. This result shows that contextual signals relating to the global scene are fed back to V1 in a similar manner whether feedforward inputs are present or absent. Our results are in line with contextual feedback signals from extrastriate areas to V1 describing global scene information and contributing to perceptual computations such as the hierarchical representation of feature boundaries within natural scenes.
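The cross-classification approach described here can be sketched as follows: a scene classifier is trained on patterns evoked by images at the original position and tested on patterns evoked by spatially displaced versions; the displacement at which accuracy falls to chance estimates the spatial precision of the signal. The code below is a toy simulation under assumed data and an arbitrary signal-decay rule, not the authors' pipeline; all names and numbers are placeholders.

```python
# Toy cross-classification across spatial displacements (simulated data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 60, 300
displacements_deg = [0, 1, 2, 3, 4, 5]  # hypothetical displacement steps in degrees

def simulate(scene_mean, extra_noise):
    """Simulate one scene's voxel patterns with a given mean signal."""
    return rng.normal(scene_mean, 1.0 + extra_noise, (n_trials, n_voxels))

# Train on two scenes presented at the original position.
X_train = np.vstack([simulate(0.5, 0.0), simulate(-0.5, 0.0)])
y_train = np.array([0] * n_trials + [1] * n_trials)
clf = SVC(kernel="linear").fit(X_train, y_train)

for d in displacements_deg:
    # Larger displacement -> weaker shared scene signal in this toy simulation.
    strength = max(0.5 - 0.12 * d, 0.0)
    X_test = np.vstack([simulate(strength, 0.2), simulate(-strength, 0.2)])
    print(f"{d} deg displacement: accuracy = {clf.score(X_test, y_train):.2f}")
```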
Affiliation(s)
- Lucy S Petro: Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Fraser W Smith: School of Psychology, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, UK
- Clement Abbatecola: Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
- Lars Muckli: Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QB, UK; Imaging Centre for Excellence (ICE), College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G51 4LB, UK
6. Schuurmans JP, Bennett MA, Petras K, Goffaux V. Backward masking reveals coarse-to-fine dynamics in human V1. Neuroimage 2023;274:120139. PMID: 37137434. DOI: 10.1016/j.neuroimage.2023.120139
Abstract
Natural images exhibit luminance variations aligned across a broad spectrum of spatial frequencies (SFs). It has been proposed that, at early stages of processing, the coarse signals carried by the low SFs (LSFs) of the visual input are sent rapidly from primary visual cortex (V1) to ventral, dorsal and frontal regions to form a coarse representation of the input, which is later sent back to V1 to guide the processing of fine-grained high SFs (HSFs). We used functional magnetic resonance imaging (fMRI) to investigate the role of human V1 in the coarse-to-fine integration of visual input. We disrupted the processing of the coarse and fine content of full-spectrum human face stimuli via backward masking of selective SF ranges (LSFs: <1.75 cpd; HSFs: >1.75 cpd) at specific times (50, 83, 100 or 150 ms). In line with coarse-to-fine proposals, we found that (1) selective masking of the stimulus LSFs disrupted V1 activity most in the earliest time window and progressively less thereafter, while (2) the opposite trend was observed for masking of the stimulus HSFs. This pattern of activity was found in V1, as well as in ventral (i.e., the fusiform face area, FFA), dorsal and orbitofrontal regions. We additionally presented subjects with contrast-negated stimuli. While contrast negation significantly reduced response amplitudes in the FFA, as well as the coupling between FFA and V1, coarse-to-fine dynamics were not affected by this manipulation. The fact that V1 response dynamics to strictly identical stimulus sets differed depending on the masked scale adds to growing evidence that V1's role goes beyond the early and quasi-passive transmission of visual information to the rest of the brain. It instead indicates that V1 may provide a 'spatially registered common forum' or 'blackboard' that integrates top-down inferences with incoming visual signals through its recurrent interactions with higher-level regions in inferotemporal, dorsal and frontal cortex.
Affiliation(s)
- Jolien P Schuurmans: Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Matthew A Bennett: Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IONS), UC Louvain, Louvain-la-Neuve, Belgium
- Kirsten Petras: Integrative Neuroscience and Cognition Center, CNRS, Université Paris Cité, Paris, France
- Valérie Goffaux: Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IONS), UC Louvain, Louvain-la-Neuve, Belgium; Maastricht University, Maastricht, the Netherlands
7. Brooks JA, Stolier RM, Freeman JB. Computational approaches to the neuroscience of social perception. Soc Cogn Affect Neurosci 2021;16:827-837. PMID: 32986115. PMCID: PMC8343569. DOI: 10.1093/scan/nsaa127
Abstract
Across multiple domains of social perception, including social categorization, emotion perception, impression formation and mentalizing, multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map the neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and discuss how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
Affiliation(s)
- Jeffrey A Brooks: Department of Psychology, New York University, New York, NY, USA
- Ryan M Stolier: Columbia University, 1190 Amsterdam Ave., New York, NY 10027, USA
8. Penetrabilidad cognitiva en la percepción visual temprana: Evidencia empírica en humanos [Cognitive penetrability in early visual perception: Empirical evidence in humans]. Revista Iberoamericana de Psicología 2021. DOI: 10.33881/2027-1786.rip.13301
Abstract
Building on a theoretical background concerning the modular conceptions of the mind of Fodor (2001) and Pinker (2005), the aim of this text is to qualitatively analyze the strength of the experimental evidence in a sample of articles published between 2002 and 2017 that support the thesis of cognitive penetrability in early visual perception. The study is justified by the implications these findings may have for different conceptions of mental architecture in perceptual functions, for intra- and inter-modular information processing, and for the isomorphism between mental and cerebral architecture. The methodology involved establishing the thesis and the inclusion criteria for the articles to be reviewed, a final selection of the most representative articles in the chosen subareas, analysis of their methodological quality and results, identification of each study's specific contribution to the stated thesis, and interpretation and synthesis of the findings. Of 26 articles reviewed on the topic, 7 are reported and analyzed, considered representative of 4 subareas: penetrability by expectations, and penetrability in color perception, facial features, and object recognition. It is concluded that there is broad and solid converging (perceptual and neurophysiological) evidence in favor of penetration phenomena in early vision, which would indirectly support Pinker's hypothesis of permeable mental modules. Recommendations are made regarding aspects requiring further research and variables to control in experiments on this topic.
9. McCullough S, Emmorey K. Effects of deafness and sign language experience on the human brain: voxel-based and surface-based morphometry. Lang Cogn Neurosci 2021;36:422-439. PMID: 33959670. PMCID: PMC8096161. DOI: 10.1080/23273798.2020.1854793
Abstract
We investigated how deafness and sign language experience affect the human brain by comparing neuroanatomical structures across congenitally deaf signers (n = 30), hearing native signers (n = 30), and hearing sign-naïve controls (n = 30). Both voxel-based and surface-based morphometry results revealed deafness-related structural changes in visual cortices (grey matter), right frontal lobe (gyrification), and left Heschl's gyrus (white matter). The comparisons also revealed changes associated with lifelong signing experience: expansions in the surface area within left anterior temporal and left occipital lobes, and a reduction in cortical thickness in the right occipital lobe for deaf and hearing signers. Structural changes within these brain regions may be related to adaptations in the neural networks involved in processing signed language (e.g. visual perception of face and body movements). Hearing native signers also had unique neuroanatomical changes (e.g. reduced gyrification in premotor areas), perhaps due to lifelong experience with both a spoken and a signed language.
Affiliation(s)
- Stephen McCullough: Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, CA, USA
- Karen Emmorey: Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, CA, USA
10. Keitel A, Gross J, Kayser C. Shared and modality-specific brain regions that mediate auditory and visual word comprehension. Elife 2020;9:e56972. PMID: 32831168. PMCID: PMC7470824. DOI: 10.7554/elife.56972
Abstract
Visual speech carried by lip movements is an integral part of communication. Yet it remains unclear to what extent visual and acoustic speech comprehension are mediated by the same brain regions. Using multivariate classification of full-brain MEG data, we first probed where the brain represents acoustically and visually conveyed word identities. We then tested where these sensory-driven representations are predictive of participants' trial-wise comprehension. The comprehension-relevant representations of auditory and visual speech converged only in anterior angular and inferior frontal regions and were spatially dissociated from the representations that best reflected sensory-driven word identity. These results provide a neural explanation for the behavioural dissociation of acoustic and visual speech comprehension and suggest that cerebral representations encoding word identities may be more modality-specific than commonly held.
Affiliation(s)
- Anne Keitel: Psychology, University of Dundee, Dundee, United Kingdom; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christoph Kayser: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany
11. Vizioli L, De Martino F, Petro LS, Kersten D, Ugurbil K, Yacoub E, Muckli L. Multivoxel Pattern of Blood Oxygen Level Dependent Activity can be sensitive to stimulus specific fine scale responses. Sci Rep 2020;10:7565. PMID: 32371891. PMCID: PMC7200825. DOI: 10.1038/s41598-020-64044-x
Abstract
At ultra-high field, fMRI voxels can span the sub-millimeter range, allowing the recording of blood oxygenation level dependent (BOLD) responses at the level of fundamental units of neural computation, such as cortical columns and layers. This sub-millimeter resolution, however, is only nominal in nature, as a number of factors limit the spatial acuity of functional voxels. Multivoxel pattern analysis (MVPA) may provide a means to detect information at finer spatial scales that may otherwise not be visible at the single-voxel level due to limitations in sensitivity and specificity. Here, we evaluate the spatial scale of stimulus-specific BOLD responses in multivoxel patterns exploited by linear Support Vector Machine, Linear Discriminant Analysis and Naïve Bayesian classifiers across cortical depths in V1. To this end, we artificially misaligned the testing relative to the training portion of the data in increasing spatial steps, then investigated the breakdown of the classifiers' performance. A one-voxel shift led to a significant decrease in decoding accuracy (p < 0.05) across all cortical depths, indicating that stimulus-specific responses in a multivoxel pattern of BOLD activity exploited by multivariate decoders can be as precise as the nominal resolution of single voxels (here 0.8 mm isotropic). Our results further indicate that large draining vessels, which reside predominantly near the pial surface, do not, in this case, hinder the ability of MVPA to exploit fine-scale patterns of BOLD signals. We argue that tailored analytical approaches can help overcome limitations in high-resolution fMRI and permit studying the mesoscale organization of the human brain with greater sensitivity.
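The misalignment test described here can be illustrated with a toy example: a decoder is trained on one half of the data and tested on the other half after shifting the test patterns by whole voxels (np.roll stands in for the spatial shift), and a drop in accuracy at a one-voxel shift would indicate voxel-scale information. Everything below is simulated and schematic; the real analysis operates on depth-resolved V1 data, not random arrays.

```python
# Toy voxel-shift breakdown test with a linear discriminant decoder (simulated data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_trials, n_voxels = 100, 250

# Simulated fine-scale, voxel-specific stimulus signal plus noise.
signal = rng.normal(size=n_voxels)
X_a = rng.normal(size=(n_trials, n_voxels)) + signal   # condition A
X_b = rng.normal(size=(n_trials, n_voxels)) - signal   # condition B
X = np.vstack([X_a, X_b])
y = np.array([0] * n_trials + [1] * n_trials)

train = np.arange(0, 2 * n_trials, 2)  # even trials for training
test = np.arange(1, 2 * n_trials, 2)   # odd trials for testing
clf = LinearDiscriminantAnalysis().fit(X[train], y[train])

for shift in range(4):
    X_shifted = np.roll(X[test], shift, axis=1)  # misalign test patterns by `shift` voxels
    print(f"shift = {shift} voxels: accuracy = {clf.score(X_shifted, y[test]):.2f}")
```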
Affiliation(s)
- Luca Vizioli: CMRR, University of Minnesota, Minneapolis, MN, United States
- Federico De Martino: CMRR, University of Minnesota, Minneapolis, MN, United States; Maastricht University, Maastricht, Netherlands
- Daniel Kersten: Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Kamil Ugurbil: CMRR, University of Minnesota, Minneapolis, MN, United States
- Essa Yacoub: CMRR, University of Minnesota, Minneapolis, MN, United States
- Lars Muckli: University of Glasgow, Glasgow, United Kingdom
12. Guo K, Calver L, Soornack Y, Bourke P. Valence-dependent Disruption in Processing of Facial Expressions of Emotion in Early Visual Cortex—A Transcranial Magnetic Stimulation Study. J Cogn Neurosci 2020;32:906-916. DOI: 10.1162/jocn_a_01520
Abstract
Our visual inputs are often entangled with affective meanings in natural vision, implying the existence of extensive interaction between visual and emotional processing. However, little is known about the neural mechanism underlying such interaction. This exploratory transcranial magnetic stimulation (TMS) study examined the possible involvement of the early visual cortex (EVC, Area V1/V2/V3) in perceiving facial expressions of different emotional valences. Across three experiments, single-pulse TMS was delivered at different time windows (50–150 msec) after a brief 10-msec onset of face images, and participants reported the visibility and perceived emotional valence of faces. Interestingly, earlier TMS at ∼90 msec only reduced the face visibility irrespective of displayed expressions, but later TMS at ∼120 msec selectively disrupted the recognition of negative facial expressions, indicating the involvement of EVC in the processing of negative expressions at a later time window, possibly beyond the initial processing of fed-forward facial structure information. The observed TMS effect was further modulated by individuals' anxiety level. TMS at ∼110–120 msec disrupted the recognition of anger significantly more for those scoring relatively low in trait anxiety than the high scorers, suggesting that cognitive bias influences the processing of facial expressions in EVC. Taken together, it seems that EVC is involved in structural encoding of (at least) negative facial emotional valence, such as fear and anger, possibly under modulation from higher cortical areas.
13. The effects of Botulinum toxin on the detection of gradual changes in facial emotion. Sci Rep 2019;9:11734. PMID: 31409880. PMCID: PMC6692314. DOI: 10.1038/s41598-019-48275-1
Abstract
When we feel sad or depressed, our face invariably “drops”. Conversely, when we try to cheer someone up, we might tell them “keep your smile up”, so presupposing that modifying the configuration of their facial muscles will enhance their mood. A crucial assumption that underpins this hypothesis is that mental states are shaped by information originating from the peripheral neuromotor system — a view operationalised as the Facial Feedback Hypothesis. We used botulinum toxin (BoNT-A) injected over the frown area to temporarily paralyse muscles necessary to express anger. Using a pre-post treatment design, we presented participants with gradually changing videos of a face morphing from neutral to full-blown expressions of either anger or happiness and asked them to press a button as soon as they had detected any change in the display. Results indicate that while all participants (control and BoNT-A) improved their reaction times from pre-test to post-test, the BoNT-A group did not when detecting anger in the post-test. We surmise that frown paralysis disadvantaged participants in their ability to improve the detection of anger. Our finding suggests that facial feedback causally affects perceptual awareness of changes in emotion, as well as people’s ability to use perceptual information to learn.
14. Shehzad Z, McCarthy G. Perceptual and Semantic Phases of Face Identification Processing: A Multivariate Electroencephalography Study. J Cogn Neurosci 2019;31:1827-1839. PMID: 31368824. DOI: 10.1162/jocn_a_01453
Abstract
Rapid identification of a familiar face requires an image-invariant representation of person identity. A varying sample of familiar faces is necessary to disentangle image-level from person-level processing. We investigated the time course of face identity processing using a multivariate electroencephalography analysis. Participants saw ambient exemplars of celebrity faces that differed in pose, lighting, hairstyle, and so forth. A name prime preceded a face on half of the trials to preactivate person-specific information, whereas a neutral prime was used on the remaining half. This manipulation helped dissociate perceptual- and semantic-based identification. Two time intervals within the post-face onset electroencephalography epoch were sensitive to person identity. The early perceptual phase spanned 110-228 msec and was not modulated by the name prime. The late semantic phase spanned 252-1000 msec and was sensitive to person knowledge activated by the name prime. Within this late phase, the identity response occurred earlier in time (300-600 msec) for the name prime with a scalp topography similar to the FN400 ERP. This may reflect a matching of the person primed in memory with the face on the screen. Following a neutral prime, the identity response occurred later in time (500-800 msec) with a scalp topography similar to the P600f ERP. This may reflect activation of semantic knowledge associated with the identity. Our results suggest that processing of identity begins early (110 msec), with some tolerance to image-level variations, and then progresses in stages sensitive to perceptual and then to semantic features.
15. The neural representation of facial-emotion categories reflects conceptual structure. Proc Natl Acad Sci U S A 2019;116:15861-15870. PMID: 31332015. DOI: 10.1073/pnas.1816408116
Abstract
Humans reliably categorize configurations of facial actions into specific emotion categories, leading some to argue that this process is invariant between individuals and cultures. However, growing behavioral evidence suggests that factors such as emotion-concept knowledge may shape the way emotions are visually perceived, leading to variability, rather than universality, in facial-emotion perception. Understanding variability in emotion perception is only emerging, and the neural basis of any impact from the structure of emotion-concept knowledge remains unknown. In a neuroimaging study, we used a representational similarity analysis (RSA) approach to measure the correspondence between the conceptual, perceptual, and neural representational structures of the six emotion categories Anger, Disgust, Fear, Happiness, Sadness, and Surprise. We found that subjects exhibited individual differences in their conceptual structure of emotions, which predicted their own unique perceptual structure. When viewing faces, the representational structure of multivoxel patterns in the right fusiform gyrus was significantly predicted by a subject's unique conceptual structure, even when controlling for potential physical similarity in the faces themselves. Finally, cross-cultural differences in emotion perception were also observed, which could be explained by individual differences in conceptual structure. Our results suggest that the representational structure of emotion expressions in visual face-processing regions may be shaped by idiosyncratic conceptual understanding of emotion categories.
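The core RSA step described in this abstract, correlating a conceptual dissimilarity structure with a neural one, can be sketched compactly. The snippet below uses random placeholder matrices for the conceptual features and multivoxel patterns; it illustrates the generic RSA comparison only, not the authors' specific models or partial-correlation controls.

```python
# Minimal RSA sketch: compare conceptual and neural RDMs for six emotion categories.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

concept_features = rng.normal(size=(6, 20))  # e.g., ratings of each category on conceptual dimensions
neural_patterns = rng.normal(size=(6, 500))  # e.g., mean multivoxel pattern per category in a ROI

concept_rdm = squareform(pdist(concept_features, metric="correlation"))
neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))

# Compare only the unique off-diagonal entries of each RDM.
iu = np.triu_indices(len(emotions), k=1)
rho, p = spearmanr(concept_rdm[iu], neural_rdm[iu])
print(f"conceptual-neural RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```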
16. VanRullen R, Reddy L. Reconstructing faces from fMRI patterns using deep generative neural networks. Commun Biol 2019;2:193. PMID: 31123717. PMCID: PMC6529435. DOI: 10.1038/s42003-019-0438-y
Abstract
Although distinct categories are reliably decoded from fMRI brain responses, it has proved more difficult to distinguish visually similar inputs, such as different faces. Here, we apply a recently developed deep learning system to reconstruct face images from human fMRI. We trained a variational auto-encoder (VAE) neural network using a GAN (Generative Adversarial Network) unsupervised procedure over a large data set of celebrity faces. The auto-encoder latent space provides a meaningful, topologically organized 1024-dimensional description of each image. We then presented several thousand faces to human subjects, and learned a simple linear mapping between the multi-voxel fMRI activation patterns and the 1024 latent dimensions. Finally, we applied this mapping to novel test images, translating fMRI patterns into VAE latent codes, and codes into face reconstructions. The system not only performed robust pairwise decoding (>95% correct), but also accurate gender classification, and even decoded which face was imagined, rather than seen.
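The decoding pipeline summarized above has two simple ingredients that can be sketched: a linear mapping from voxel patterns to latent codes, and a pairwise decoding test that asks whether the predicted code for a face is closer to its own latent code than to that of another test face. The example below is a hypothetical toy with a much smaller, simulated latent space (not the 1024-dimensional VAE space) and synthetic data throughout.

```python
# Toy linear brain-to-latent mapping with pairwise decoding evaluation (simulated data).
from itertools import combinations
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_train, n_test, n_voxels, n_latent = 400, 20, 1000, 64  # placeholder sizes

Z_train = rng.normal(size=(n_train, n_latent))  # latent codes of training faces
X_train = Z_train @ rng.normal(size=(n_latent, n_voxels)) + rng.normal(size=(n_train, n_voxels))

Z_test = rng.normal(size=(n_test, n_latent))
X_test = Z_test @ rng.normal(size=(n_latent, n_voxels)) + rng.normal(size=(n_test, n_voxels))

mapping = Ridge(alpha=10.0).fit(X_train, Z_train)
Z_pred = mapping.predict(X_test)

correct = 0
pairs = list(combinations(range(n_test), 2))
for i, j in pairs:
    # Pairwise decoding: is the prediction for face i closer to its own code than to face j's?
    d_own = np.linalg.norm(Z_pred[i] - Z_test[i])
    d_other = np.linalg.norm(Z_pred[i] - Z_test[j])
    correct += d_own < d_other
print(f"pairwise decoding accuracy: {correct / len(pairs):.2f}")
```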
Affiliation(s)
- Rufin VanRullen: CerCo, CNRS, UMR 5549, Université de Toulouse, Toulouse 31052, France
- Leila Reddy: CerCo, CNRS, UMR 5549, Université de Toulouse, Toulouse 31052, France
17. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019;195:261-271. PMID: 30940611. DOI: 10.1016/j.neuroimage.2019.03.065
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for the perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used multivariate pattern analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit task contexts (e.g., decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g., decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity, whereas under incidental conditions only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a rich degree even under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
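The time-resolved decoding analysis described here follows a standard recipe: at each time point, a classifier is cross-validated on the spatial pattern across electrodes, yielding a decoding time course. The sketch below uses simulated epochs with an injected label-dependent signal; in practice the epochs would come from preprocessed EEG (for example, an MNE Epochs array), and the labels would be expression or identity categories.

```python
# Toy time-resolved EEG decoding: one cross-validated classifier per time sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels, n_times = 120, 64, 100  # hypothetical epoch dimensions
y = rng.integers(0, 2, n_trials)              # e.g., expression category labels

# Simulated epochs with a label-dependent signal between samples 30 and 50.
epochs = rng.normal(size=(n_trials, n_channels, n_times))
epochs[:, :, 30:50] += (y[:, None, None] * 2 - 1) * 0.4

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, epochs[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", accuracy.max(), "at sample", int(accuracy.argmax()))
```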
Affiliation(s)
- Fraser W Smith: School of Psychology, University of East Anglia, Norwich, UK
- Marie L Smith: School of Psychological Sciences, Birkbeck College, University of London, London, UK
18. Greening SG, Mitchell DG, Smith FW. Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex 2018;101:31-43. DOI: 10.1016/j.cortex.2017.11.016
19. Dobs K, Schultz J, Bülthoff I, Gardner JL. Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage 2018;172:689-702. PMID: 29432802. DOI: 10.1016/j.neuroimage.2018.02.013
Abstract
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.
Affiliation(s)
- Katharina Dobs: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA
- Johannes Schultz: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Sigmund Freud Str. 25, 53105 Bonn, Germany
- Isabelle Bülthoff: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Justin L Gardner: Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Psychology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
20. Wackerhagen C, Wüstenberg T, Mohnke S, Erk S, Veer IM, Kruschwitz JD, Garbusow M, Romund L, Otto K, Schweiger JI, Tost H, Heinz A, Meyer-Lindenberg A, Walter H, Romanczuk-Seiferth N. Influence of Familial Risk for Depression on Cortico-Limbic Connectivity During Implicit Emotional Processing. Neuropsychopharmacology 2017;42:1729-1738. PMID: 28294134. PMCID: PMC5518910. DOI: 10.1038/npp.2017.59
Abstract
Imbalances in cortico-limbic activity and functional connectivity (FC) supposedly underlie biased emotional processing and present putative intermediate phenotypes (IPs) for major depressive disorder (MDD). To test the validity of these IPs, we assessed them in individuals at familial risk. In 70 healthy first-degree relatives of MDD patients and 70 controls, brain activity and seed-based amygdala FC were assessed during an implicit emotional processing task for fMRI containing angry and fearful faces. Using the generalized psychophysiological interaction approach, amygdala FC was assessed (a) across conditions, to provide data comparable to previous studies, and (b) compared between conditions, to elucidate its implications for emotional processing. Associations of amygdala FC with self-reported negative affect (NA) were explored post hoc. Groups did not differ in brain activation. In relatives, amygdala FC across conditions was decreased with the superior and medial frontal gyrus (SFG, MFG) and increased with the subgenual and perigenual anterior cingulate cortex (sgACC, pgACC). NA was inversely correlated with amygdala FC with the MFG, the pgACC and their interaction in relatives. Relatives showed aberrant condition-dependent modulations of amygdala FC with visual cortex, thalamus and orbitofrontal cortex. Our results do not support imbalanced cortico-limbic activity as an IP for MDD. Diminished amygdala-dorsomedial prefrontal FC in relatives might indicate insufficient regulatory capacity, which appears to be compensated by ventromedial prefrontal regions. Differential task-dependent modulations of amygdala FC are discussed as reflecting a stronger involvement of automatic rather than voluntary emotional processing pathways. The reliability and etiological implications of these results should be investigated in future studies including longitudinal designs and patient-risk-control comparisons.
Affiliation(s)
- Carolin Wackerhagen, Torsten Wüstenberg, Sebastian Mohnke, Susanne Erk, Ilya M Veer, Johann D Kruschwitz, Maria Garbusow, Lydia Romund, Andreas Heinz, Henrik Walter, Nina Romanczuk-Seiferth: Division of Mind and Brain Research, Department of Psychiatry and Psychotherapy, Charité—Universitätsmedizin Berlin, Campus Mitte, Berlin, Germany
- Kristina Otto, Janina I Schweiger, Heike Tost, Andreas Meyer-Lindenberg: Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Mannheim, Germany
21. The time course of individual face recognition: A pattern analysis of ERP signals. Neuroimage 2016;132:469-476. DOI: 10.1016/j.neuroimage.2016.03.006
22. Prete G, Capotosto P, Zappasodi F, Laeng B, Tommasi L. The cerebral correlates of subliminal emotions: an electroencephalographic study with emotional hybrid faces. Eur J Neurosci 2015;42:2952-2962. PMID: 26370468. DOI: 10.1111/ejn.13078
Abstract
In a high-resolution electroencephalographic study, participants evaluated the friendliness level of upright and inverted 'hybrid faces', i.e. facial photos containing a subliminal emotional core in the low spatial frequencies (< 6 cycles/image), superimposed on a neutral expression in the rest of the spatial frequencies. Upright happy and angry faces were judged as more friendly or less friendly than neutral faces, respectively. We observed the time course of cerebral correlates of these stimuli with event-related potentials (ERPs), confirming that hybrid faces elicited the posterior emotion-related and face-related components (P1, N170 and P2), previously shown to be engaged by non-subliminal emotional stimuli. In addition, these components were stronger in the right hemisphere and were both enhanced and delayed by face inversion. A frontal positivity (210-300 ms) was stronger for emotional than for neutral faces, and for upright than for inverted faces. Hence, hybrid faces represent an original approach in the study of subliminal emotions, which appears promising for investigating their electrophysiological correlates.
Affiliation(s)
- Giulia Prete: Department of Neuroscience, Imaging and Clinical Science, 'G. d'Annunzio' University of Chieti-Pescara, Blocco A, Via dei Vestini 31, I-66013, Chieti, Italy
- Paolo Capotosto: Department of Neuroscience, Imaging and Clinical Science, 'G. d'Annunzio' University of Chieti-Pescara, Blocco A, Via dei Vestini 31, I-66013, Chieti, Italy; Institute for Advanced Biomedical Technologies, 'G. d'Annunzio' University of Chieti-Pescara, Chieti, Italy
- Filippo Zappasodi: Department of Neuroscience, Imaging and Clinical Science, 'G. d'Annunzio' University of Chieti-Pescara, Blocco A, Via dei Vestini 31, I-66013, Chieti, Italy; Institute for Advanced Biomedical Technologies, 'G. d'Annunzio' University of Chieti-Pescara, Chieti, Italy
- Bruno Laeng: Department of Psychology, University of Oslo, Oslo, Norway
- Luca Tommasi: Department of Psychological Science, Health and Territory, 'G. d'Annunzio' University of Chieti-Pescara, Chieti, Italy
23. Wegrzyn M, Riehle M, Labudda K, Woermann F, Baumgartner F, Pollmann S, Bien CG, Kissler J. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis. Cortex 2015;69:131-140. PMID: 26046623. DOI: 10.1016/j.cortex.2015.05.003
Abstract
Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions, each taking on a specific role in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, the fusiform gyrus and superior temporal sulcus (STS) showed the highest accuracies. These results underscore the role of the fusiform gyrus as a key region in the perception of facial expressions, alongside the STS. The study suggests the need for further specification of the relative roles of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
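The ROI comparison described here amounts to running the same cross-validated decoder on voxel patterns from each region and comparing the resulting accuracies. The sketch below is a toy version with random placeholder data, made-up ROI names and voxel counts, and an artificial signal; it only illustrates the comparison logic, not the authors' regions or statistics.

```python
# Toy per-ROI decoding comparison for a four-way expression classification (simulated data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_trials = 160
y = np.repeat([0, 1, 2, 3], n_trials // 4)  # happy, angry, fearful, neutral

rois = {"FFA": 300, "STS": 250, "OFA": 200, "amygdala": 80}  # hypothetical voxel counts
for name, n_voxels in rois.items():
    X = rng.normal(size=(n_trials, n_voxels)) + 0.2 * y[:, None]  # toy class-dependent signal
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    print(f"{name}: mean 4-way accuracy = {acc:.2f} (chance = 0.25)")
```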
Affiliation(s)
- Martin Wegrzyn: Department of Psychology, University of Bielefeld, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), University of Bielefeld, Bielefeld, Germany
- Marcel Riehle: Department of Psychology, University of Bielefeld, Bielefeld, Germany
- Kirsten Labudda: Epilepsy Centre Bethel, Krankenhaus Mara, Bielefeld, Germany
- Florian Baumgartner: Department of Psychology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Stefan Pollmann: Department of Psychology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Johanna Kissler: Department of Psychology, University of Bielefeld, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), University of Bielefeld, Bielefeld, Germany
24. Petro LS, Vizioli L, Muckli L. Contributions of cortical feedback to sensory processing in primary visual cortex. Front Psychol 2014;5:1223. PMID: 25414677. PMCID: PMC4222340. DOI: 10.3389/fpsyg.2014.01223
Abstract
Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998) but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflects not only sensory processing but internal brain states.
Affiliation(s)
- Lucy S Petro: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Luca Vizioli: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Lars Muckli: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
25. Visconti di Oleggio Castello M, Guntupalli JS, Yang H, Gobbini MI. Facilitated detection of social cues conveyed by familiar faces. Front Hum Neurosci 2014;8:678. PMID: 25228873. PMCID: PMC4151039. DOI: 10.3389/fnhum.2014.00678
Abstract
Recognition of the identity of familiar faces in conditions with poor visibility or over large changes in head angle, lighting and partial occlusion is far more accurate than recognition of unfamiliar faces in similar conditions. Here we used a visual search paradigm to test if one class of social cues transmitted by faces—direction of another's attention as conveyed by gaze direction and head orientation—is perceived more rapidly in personally familiar faces than in unfamiliar faces. We found a strong effect of familiarity on the detection of these social cues, suggesting that the times to process these signals in familiar faces are markedly faster than the corresponding processing times for unfamiliar faces. In the light of these new data, hypotheses on the organization of the visual system for processing faces are formulated and discussed.
Affiliation(s)
- J Swaroop Guntupalli: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Hua Yang: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- M Ida Gobbini: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA; Department of Medicina Specialistica, Diagnostica e Sperimentale (DIMES), Medical School, University of Bologna, Italy
26. Harry B, Williams MA, Davis C, Kim J. Emotional expressions evoke a differential response in the fusiform face area. Front Hum Neurosci 2013;7:692. PMID: 24194707. PMCID: PMC3809557. DOI: 10.3389/fnhum.2013.00692
Abstract
It is widely assumed that the fusiform face area (FFA), a brain region specialized for face perception, is not involved in processing emotional expressions. This assumption is based on the proposition that the FFA is involved in face identification and only processes features that are invariant across changes due to head movements, speaking and expressing emotions. The present study tested this proposition by examining whether the response in the human FFA varies across emotional expressions with functional magnetic resonance imaging and brain decoding analysis techniques (n = 11). A one vs. all classification analysis showed that most emotional expressions that participants perceived could be reliably predicted from the neural pattern of activity in left and the right FFA, suggesting that the perception of different emotional expressions recruit partially non-overlapping neural mechanisms. In addition, emotional expressions could also be decoded from the pattern of activity in the early visual cortex (EVC), indicating that retinotopic cortex also shows a differential response to emotional expressions. These results cast doubt on the idea that the FFA is involved in expression invariant face processing, and instead indicate that emotional expressions evoke partially de-correlated signals throughout occipital and posterior temporal cortex.
27. Cortical activation deficits during facial emotion processing in youth at high risk for the development of substance use disorders. Drug Alcohol Depend 2013;131:230-237. PMID: 23768841. PMCID: PMC3740548. DOI: 10.1016/j.drugalcdep.2013.05.015
Abstract
BACKGROUND: Recent longitudinal studies demonstrate that addiction risk may be influenced by a cognitive, affective and behavioral phenotype that emerges during childhood. Relatively little research has focused on the affective or emotional risk components of this high-risk phenotype, including the relevant neurobiology.
METHODS: Non-substance-abusing youth (N=19; mean age=12.2) with externalizing psychopathology and a paternal history of a substance use disorder, and demographically matched healthy comparisons (N=18; mean age=11.9), were tested on a facial emotion matching task during functional MRI. This task involved matching faces by emotion (angry, anxious) or matching shape orientation.
RESULTS: High-risk youth exhibited increased medial prefrontal, precuneus and occipital cortex activation compared to the healthy comparison group during the face-matching condition, relative to the control shape condition. The occipital activation correlated positively with parent-rated emotion regulation impairments in the high-risk group.
CONCLUSIONS: These findings suggest a preexisting abnormality in cortical activation in response to facial emotion matching in youth at high risk for the development of problem drug or alcohol use. These cortical deficits may underlie impaired affective processing and regulation, which in turn may contribute to escalating drug use in adolescence.
28. Network interactions: non-geniculate input to V1. Curr Opin Neurobiol 2013;23:195-201. DOI: 10.1016/j.conb.2013.01.020