1. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024;66:101360. [PMID: 38394708] [PMCID: PMC10899073] [DOI: 10.1016/j.dcn.2024.101360]
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other hand, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent: the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
  - Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
  - Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
  - Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
2. Schwartz E, Alreja A, Richardson RM, Ghuman A, Anzellotti S. Intracranial Electroencephalography and Deep Neural Networks Reveal Shared Substrates for Representations of Face Identity and Expressions. J Neurosci 2023;43:4291-4303. [PMID: 37142430] [PMCID: PMC10255163] [DOI: 10.1523/jneurosci.1277-22.2023]
Abstract
According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (that enables above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested, even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

SIGNIFICANCE STATEMENT: Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might share common brain regions instead. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions that the classical view hypothesizes to be specialized for expression. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
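As an illustration of the representational-similarity logic described in this abstract, the sketch below builds RDMs from simulated response patterns and from simulated network features for the same stimuli and compares them. It is not code from the study; the stimulus count, the correlation-distance metric, and the Spearman comparison are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 28                                       # faces varying in identity and expression (assumed)

neural = rng.normal(size=(n_stimuli, 120))           # stand-in for electrode response patterns
net_identity = rng.normal(size=(n_stimuli, 512))     # stand-in for identity-trained DCNN features
net_expression = rng.normal(size=(n_stimuli, 512))   # stand-in for expression-trained DCNN features

def rdm(patterns):
    """Representational dissimilarity matrix: correlation distance between stimulus patterns."""
    return squareform(pdist(patterns, metric="correlation"))

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the lower triangles of two RDMs."""
    tri = np.tril_indices_from(rdm_a, k=-1)
    rho, _ = spearmanr(rdm_a[tri], rdm_b[tri])
    return rho

neural_rdm = rdm(neural)
print("fit of identity-trained network:  ", rdm_similarity(neural_rdm, rdm(net_identity)))
print("fit of expression-trained network:", rdm_similarity(neural_rdm, rdm(net_expression)))
```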
Affiliation(s)
- Emily Schwartz
  - Department of Psychology and Neuroscience, Boston College, Chestnut Hill, Massachusetts 02467
- Arish Alreja
  - Center for the Neural Basis of Cognition, Carnegie Mellon University/University of Pittsburgh, Pittsburgh, Pennsylvania 15213
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
  - Department of Neurological Surgery, University of Pittsburgh Medical Center Presbyterian, Pittsburgh, Pennsylvania 15213
- R Mark Richardson
  - Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts 02114
  - Harvard Medical School, Boston, Massachusetts 02115
- Avniel Ghuman
  - Center for the Neural Basis of Cognition, Carnegie Mellon University/University of Pittsburgh, Pittsburgh, Pennsylvania 15213
  - Department of Neurological Surgery, University of Pittsburgh Medical Center Presbyterian, Pittsburgh, Pennsylvania 15213
  - Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Stefano Anzellotti
  - Department of Psychology and Neuroscience, Boston College, Chestnut Hill, Massachusetts 02467
3. Schwartz E, O’Nell K, Saxe R, Anzellotti S. Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes. Brain Sci 2023;13:296. [PMID: 36831839] [PMCID: PMC9954353] [DOI: 10.3390/brainsci13020296]
Abstract
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression is encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
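The congruence-coefficient analysis mentioned above can be illustrated with a small sketch (not from the paper): it derives one identity-discriminating and one expression-discriminating direction from simulated layer activations and measures their congruence. The mean-difference directions and array shapes are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def congruence(u, v):
    """Tucker's congruence coefficient: uncentered cosine similarity between two vectors."""
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

# Simulated activations of one network layer for face images labeled by identity and expression.
acts = rng.normal(size=(200, 256))         # 200 images x 256 layer units (shapes are assumptions)
identity = rng.integers(0, 2, size=200)    # two identities, for simplicity
expression = rng.integers(0, 2, size=200)  # two expressions

# A simple stand-in for a "discriminating direction": the difference of class means.
id_direction = acts[identity == 1].mean(axis=0) - acts[identity == 0].mean(axis=0)
ex_direction = acts[expression == 1].mean(axis=0) - acts[expression == 0].mean(axis=0)

# Values near zero indicate near-orthogonal identity and expression directions in this layer;
# repeating the computation layer by layer would trace how the two subspaces disentangle.
print("congruence coefficient:", round(congruence(id_direction, ex_direction), 3))
```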
Affiliation(s)
- Emily Schwartz
  - Department of Psychology and Neuroscience, Boston College, Boston, MA 02467, USA
- Kathryn O’Nell
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Rebecca Saxe
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Stefano Anzellotti
  - Department of Psychology and Neuroscience, Boston College, Boston, MA 02467, USA
4. Soto FA, Narasiwodeyar S. Improving the validity of neuroimaging decoding tests of invariant and configural neural representation. PLoS Comput Biol 2023;19:e1010819. [PMID: 36689555] [PMCID: PMC9894561] [DOI: 10.1371/journal.pcbi.1010819]
Abstract
Many research questions in sensory neuroscience involve determining whether the neural representation of a stimulus property is invariant or specific to a particular stimulus context (e.g., Is object representation invariant to translation? Is the representation of a face feature specific to the context of other face features?). Between these two extremes, representations may also be context-tolerant or context-sensitive. Most neuroimaging studies have used operational tests in which a target property is inferred from a significant test against the null hypothesis of the opposite property. For example, the popular cross-classification test concludes that representations are invariant or tolerant when the null hypothesis of specificity is rejected. A recently developed neurocomputational theory suggests two insights regarding such tests. First, tests against the null of context-specificity, and for the alternative of context-invariance, are prone to false positives due to the way in which the underlying neural representations are transformed into indirect measurements in neuroimaging studies. Second, jointly performing tests against the nulls of invariance and specificity allows one to reach more precise and valid conclusions about the underlying representations, particularly when the null of invariance is tested using the fine-grained information from classifier decision variables rather than only accuracies (i.e., using the decoding separability test). Here, we provide empirical and computational evidence supporting both of these theoretical insights. In our empirical study, we use encoding of orientation and spatial position in primary visual cortex as a case study, as previous research has established that these properties are encoded in a context-sensitive way. Using fMRI decoding, we show that the cross-classification test produces false-positive conclusions of invariance, but that more valid conclusions can be reached by jointly performing tests against the null of invariance. The results of two simulations further support both of these conclusions. We conclude that more valid inferences about invariance or specificity of neural representations can be reached by jointly testing against both hypotheses, and using neurocomputational theory to guide the interpretation of results.
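A hypothetical sketch of the cross-classification test discussed above, using simulated data and a generic linear decoder rather than the authors' pipeline; the signal structure, decoder choice, and context labels are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 100

# Simulated voxel patterns: a binary orientation shown at two spatial positions (contexts A and B),
# sharing one signal direction so that cross-context generalization is possible.
signal = rng.normal(size=n_voxels)
orientation_A = rng.integers(0, 2, n_trials)
orientation_B = rng.integers(0, 2, n_trials)
patterns_A = rng.normal(size=(n_trials, n_voxels)) + 0.8 * np.outer(orientation_A, signal)
patterns_B = rng.normal(size=(n_trials, n_voxels)) + 0.8 * np.outer(orientation_B, signal)

# Cross-classification: train the decoder in one context, test it in the other.
acc_A_to_B = LogisticRegression(max_iter=1000).fit(patterns_A, orientation_A).score(patterns_B, orientation_B)
acc_B_to_A = LogisticRegression(max_iter=1000).fit(patterns_B, orientation_B).score(patterns_A, orientation_A)

# Above-chance cross-decoding is commonly taken as evidence for context invariance; the abstract
# argues this inference can be a false positive and should be complemented by tests against the
# null of invariance (e.g., comparing full decision-variable distributions across contexts).
print("mean cross-decoding accuracy:", (acc_A_to_B + acc_B_to_A) / 2)
```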
Affiliation(s)
- Fabian A. Soto
  - Department of Psychology, Florida International University, Miami, Florida, United States of America
- Sanjay Narasiwodeyar
  - Department of Psychology, Florida International University, Miami, Florida, United States of America
5. Poskanzer C, Anzellotti S. Functional coordinates: Modeling interactions between brain regions as points in a function space. Netw Neurosci 2022;6:1296-1315. [PMID: 38800459] [PMCID: PMC11117108] [DOI: 10.1162/netn_a_00264]
Abstract
Here, we propose a novel technique to investigate nonlinear interactions between brain regions that captures both the strength and type of the functional relationship. Inspired by the field of functional analysis, we propose that the relationship between activity in separate brain areas can be viewed as a point in function space, identified by coordinates along an infinite set of basis functions. Using Hermite polynomials as bases, we estimate a subset of these values that serve as "functional coordinates," characterizing the interaction between BOLD activity across brain areas. We provide a proof of the convergence of the estimates in the limit, and we validate the method with simulations in which the ground truth is known, additionally showing that functional coordinates detect statistical dependence even when correlations ("functional connectivity") approach zero. We then use functional coordinates to examine neural interactions with a chosen seed region: the fusiform face area (FFA). Using k-means clustering across each voxel's functional coordinates, we illustrate that adding nonlinear basis functions allows for the discrimination of interregional interactions that are otherwise grouped together when using only linear dependence. Finally, we show that regions in V5 and medial occipital and temporal lobes exhibit significant nonlinear interactions with the FFA.
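A toy version of the "functional coordinates" idea, assuming a simple least-squares fit of Hermite-polynomial terms of one region's time course to another's; the basis order, noise level, and estimation method are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)
T = 300                                              # number of time points (assumed)

x = rng.normal(size=T)                               # seed-region BOLD time course (simulated)
y = 0.7 * (x ** 2 - 1) + 0.3 * rng.normal(size=T)    # purely quadratic coupling, so linear FC is ~0

def functional_coordinates(x, y, degree=4):
    """Least-squares coefficients of y ~ sum_k c_k He_k(x): the 'functional coordinates'."""
    design = He.hermevander(x, degree)               # columns He_0(x) ... He_degree(x)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs

coords = functional_coordinates(x, y)
print("linear correlation:    ", round(float(np.corrcoef(x, y)[0, 1]), 3))  # near zero
print("functional coordinates:", np.round(coords, 2))                       # quadratic term ~0.7
```

Even though the ordinary correlation between the two time courses is close to zero, the higher-order coordinate flags the statistical dependence, which is the behavior the abstract describes.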
Affiliation(s)
- Craig Poskanzer
  - Department of Psychology, Columbia University, New York City, NY, USA
  - Department of Psychology and Neuroscience, Boston College, Boston, MA, USA
- Stefano Anzellotti
  - Department of Psychology and Neuroscience, Boston College, Boston, MA, USA
6. de la Zerda SH, Netser S, Magalnik H, Briller M, Marzan D, Glatt S, Abergel Y, Wagner S. Social recognition in laboratory mice requires integration of behaviorally-induced somatosensory, auditory and olfactory cues. Psychoneuroendocrinology 2022;143:105859. [PMID: 35816892] [DOI: 10.1016/j.psyneuen.2022.105859]
Abstract
In humans, discrimination between individuals, also termed social recognition, can rely on a single sensory modality, such as vision. By analogy, social recognition in rodents is thought to be based upon olfaction. Here, we hypothesized that social recognition in rodents relies upon integration of olfactory, auditory and somatosensory cues, hence requiring active behavior of social stimuli. Using distinct social recognition tests, we demonstrated that adult male mice do not exhibit recognition of familiar stimuli or learn the identity of novel stimuli that are inactive due to anesthesia. We further revealed that impairing the olfactory, somatosensory or auditory systems prevents behavioral recognition of familiar stimuli. Finally, we found that familiar and novel stimuli generate distinct movement patterns during social discrimination and that subjects react differentially to the movement of these stimuli. Thus, unlike what occurs in humans, social recognition in mice relies on integration of information from several sensory modalities.
Affiliation(s)
- Shani Haskal de la Zerda, Shai Netser, Hen Magalnik, Mayan Briller, Dan Marzan, Sigal Glatt, Yasmin Abergel, and Shlomo Wagner
  - Sagol Department of Neurobiology, Integrated Brain and Behavior Research Center (IBBR), University of Haifa, Haifa, Israel
7. Merchant JS, Alkire D, Redcay E. Neural similarity between mentalizing and live social interaction during the transition to adolescence. Hum Brain Mapp 2022;43:4074-4090. [PMID: 35545954] [PMCID: PMC9374881] [DOI: 10.1002/hbm.25903]
Abstract
Social interactions are essential for human development, yet little neuroimaging research has examined their underlying neurocognitive mechanisms using socially interactive paradigms during childhood and adolescence. Recent neuroimaging research has revealed activity in the mentalizing network when children engage with a live social partner, even when mentalizing is not required. While this finding suggests that social‐interactive contexts may spontaneously engage mentalizing, it is not a direct test of how similarly the brain responds to these two contexts. The current study used representational similarity analysis on data from 8‐ to 14‐year‐olds who made mental and nonmental judgments about an abstract character and a live interaction partner during fMRI. A within‐subject, 2 (Mental/Nonmental) × 2 (Peer/Character) design enabled us to examine response pattern similarity between conditions, and estimate fit to three conceptual models of how the two contexts relate: (1) social interaction and mentalizing about an abstract character are represented similarly; (2) interactive peers and abstract characters are represented differently regardless of the evaluation type; and (3) mental and nonmental states are represented dissimilarly regardless of target. We found that the temporal poles represent mentalizing and peer interactions similarly (Model 1), suggesting a neurocognitive link between the two in these regions. Much of the rest of the social brain exhibits different representations of interactive peers and abstract characters (Model 2). Our findings highlight the importance of studying social‐cognitive processes using interactive approaches, and the utility of pattern‐based analyses for understanding how social‐cognitive processes relate to each other.
Affiliation(s)
- Junaid S Merchant, Diana Alkire, and Elizabeth Redcay
  - Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, USA
  - Department of Psychology, University of Maryland, College Park, Maryland, USA
8. Bruera A, Poesio M. Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics. Front Artif Intell 2022;5:796793. [PMID: 35280237] [PMCID: PMC8905499] [DOI: 10.3389/frai.2022.796793]
Abstract
Semantic knowledge about individual entities (i.e., the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, when compared with knowledge about generic entities (the referents of common nouns such as politician). We investigate the semantic representations of individual entities in the brain, and for the first time we approach this question using both neural data, in the form of newly-acquired EEG data, and distributional models of word meaning, employing them to isolate semantic information regarding individual entities in the brain. We ran two sets of analyses. The first set of analyses is only concerned with the evoked responses to individual entities and their categories. We find that it is possible to classify them according to both their coarse and their fine-grained category at appropriate timepoints, but that it is hard to map representational information learned from individuals to their categories. In the second set of analyses, we learn to decode from evoked responses to distributional word vectors. These results indicate that such a mapping can be learnt successfully: this counts not only as a demonstration that representations of individuals can be discriminated in EEG responses, but also as a first brain-based validation of distributional semantic models as representations of individual entities. Finally, in-depth analyses of the decoder performance provide additional evidence that the referents of proper names and categories have little in common when it comes to their representation in the brain.
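The mapping from evoked responses to distributional word vectors could look roughly like the sketch below; the ridge penalty, feature dimensions, and pairwise identification score are assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_words, n_eeg_features, vec_dim = 40, 1280, 300   # words, channel-by-time features, vector size (assumed)

word_vectors = rng.normal(size=(n_words, vec_dim))                   # stand-in distributional vectors
mixing = rng.normal(size=(vec_dim, n_eeg_features)) / np.sqrt(vec_dim)
eeg = word_vectors @ mixing + 0.5 * rng.normal(size=(n_words, n_eeg_features))  # simulated evoked responses

eeg_tr, eeg_te, vec_tr, vec_te = train_test_split(eeg, word_vectors, test_size=10, random_state=0)

decoder = Ridge(alpha=10.0).fit(eeg_tr, vec_tr)    # linear map from evoked response to word vector
pred = decoder.predict(eeg_te)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Identification score: is each predicted vector closer to its own word vector than to the others'?
hits = sum(
    corr(pred[i], vec_te[i]) > max(corr(pred[i], vec_te[j]) for j in range(len(vec_te)) if j != i)
    for i in range(len(vec_te))
)
print("identification accuracy:", hits / len(vec_te))
```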
Affiliation(s)
- Andrea Bruera
  - Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
9. Semenza C. Proper names and personal identity. Handbook of Clinical Neurology 2022;187:287-302. [PMID: 35964978] [DOI: 10.1016/b978-0-12-823493-8.00008-0]
Abstract
The present chapter reviews the body of knowledge acquired so far about the role of the temporal lobe in representing and processing proper names and individual identity information. This body of knowledge has been collected with the contribution of several methodologies, including neuroimaging, electrophysiological techniques, and, critically, clinical observations. All this evidence converges in showing that proper names and related information are processed in at least partially independent neural networks mainly placed in the anterior areas of the left temporal lobe. A description of the properties distinguishing proper names from common names is provided. These properties, it will be claimed, made a different anatomical organization necessary and, possibly, determined the evolution of the brain to support this advantageous distinction in meeting environmental demands.
Affiliation(s)
- Carlo Semenza
  - Department of Neuroscience, Padova Neuroscience Center, University of Padova, Padova, Italy
10. Spatially Adjacent Regions in Posterior Cingulate Cortex Represent Familiar Faces at Different Levels of Complexity. J Neurosci 2021;41:9807-9826. [PMID: 34670848] [PMCID: PMC8612644] [DOI: 10.1523/jneurosci.1580-20.2021]
Abstract
Extensive research has shown that perceptual information of faces is processed in a network of hierarchically-organized areas within ventral temporal cortex. For familiar and famous faces, perceptual processing of faces is normally accompanied by extraction of semantic knowledge about the social status of persons. Semantic processing of familiar faces could entail progressive stages of information abstraction. However, the cortical mechanisms supporting multistage processing of familiar faces have not been characterized. Here, using an event-related fMRI experiment, familiar faces from four celebrity groups (actors, singers, politicians, and football players) and unfamiliar faces were presented to the human subjects (both males and females) while they were engaged in a face categorization task. We systematically explored the cortical representations for faces, familiar faces, subcategories of familiar faces, and familiar face identities using whole-brain univariate analysis, searchlight-based multivariate pattern analysis (MVPA), and functional connectivity analysis. Convergent evidence from all these analyses revealed a set of overlapping regions within posterior cingulate cortex (PCC) that contained decodable fMRI responses for representing different levels of semantic knowledge about familiar faces. Our results suggest a multistage pathway in PCC for processing semantic information of faces, analogous to the multistage pathway in ventral temporal cortex for processing perceptual information of faces.

SIGNIFICANCE STATEMENT: Recognizing familiar faces is an important component of social communications. Previous research has shown that a distributed network of brain areas is involved in processing the semantic information of familiar faces. However, it is not clear how different levels of semantic information are represented in the brain. Here, we evaluated the multivariate response patterns across the entire cortex to discover the areas that contain information for familiar faces, subcategories of familiar faces, and identities of familiar faces. The searchlight maps revealed that different levels of semantic information are represented in topographically adjacent areas within posterior cingulate cortex (PCC). The results suggest that semantic processing of faces is mediated through progressive stages of information abstraction in PCC.
11. Wurm MF, Caramazza A. Two 'what' pathways for action and object recognition. Trends Cogn Sci 2021;26:103-116. [PMID: 34702661] [DOI: 10.1016/j.tics.2021.10.003]
Abstract
The ventral visual stream is conceived as a pathway for object recognition. However, we also recognize the actions an object can be involved in. Here, we show that action recognition critically depends on a pathway in lateral occipitotemporal cortex, partially overlapping and topographically aligned with object representations that are precursors for action recognition. By contrast, object features that are more relevant for object recognition, such as color and texture, are typically found in ventral occipitotemporal cortex. We argue that occipitotemporal cortex contains similarly organized lateral and ventral 'what' pathways for action and object recognition, respectively. This account explains a number of observed phenomena, such as the duplication of object domains and the specific representational profiles in lateral and ventral cortex.
Affiliation(s)
- Moritz F Wurm
  - Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
- Alfonso Caramazza
  - Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
  - Department of Psychology, Harvard University, 33 Kirkland St, Cambridge, MA 02138, USA
12. Qiu S, Mei G. Spontaneous recovery of adaptation aftereffects of natural facial categories. Vision Res 2021;188:202-210. [PMID: 34365177] [DOI: 10.1016/j.visres.2021.07.015]
Abstract
Adaptation to a natural face attribute (e.g., a happy face) can bias the perception of a subsequent face (e.g., a neutral face) along that dimension. Such face adaptation aftereffects have been widely found in many natural facial categories. However, how temporally tuned mechanisms could control the temporal dynamics of natural face adaptation aftereffects remains unknown. To address the question, we used a deadaptation paradigm to examine whether the spontaneous recovery of natural facial aftereffects would emerge in four natural facial categories including variable categories (emotional expressions in Experiment 1 and eye gaze in Experiment 2) and invariable categories (facial gender in Experiment 3 and facial identity in Experiment 4). In the deadaptation paradigm, participants adapted to a face with an extreme attribute (such as a 100% angry face in Experiment 1) for a relatively long duration, and then deadapted to a face with an opposite extreme attribute (such as a 100% happy face in Experiment 1) for a relatively short duration. The time courses of face adaptation aftereffects were measured using a top-up procedure. Deadaptation only masked the effects of initial longer-lasting adaptation, and the spontaneous recovery of adaptation aftereffects was observed at the post-test stage for all four natural facial categories. These results likely indicate that the temporal dynamics of adaptation aftereffects of natural facial categories may be controlled by multiple temporally tuned mechanisms.
Affiliation(s)
- Shiming Qiu
  - School of Psychology, Guizhou Normal University, Guiyang, PR China
- Gaoxing Mei
  - School of Psychology, Guizhou Normal University, Guiyang, PR China
13. FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021;41:1952-1969. [PMID: 33452225] [DOI: 10.1523/jneurosci.1449-20.2020]
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, social traits, gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.

SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
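A schematic of the model-comparison step described above, ranking simulated stand-ins for candidate model RDMs against a simulated region RDM; the model names, metrics, and identity count are placeholders rather than the study's actual models.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_identities = 12   # number of famous face identities (assumed)

# Candidate model RDMs for the same identities (all simulated stand-ins for the real models).
model_rdms = {
    "pixel-wise dissimilarity": pdist(rng.normal(size=(n_identities, 1000)), metric="euclidean"),
    "face-network features":    pdist(rng.normal(size=(n_identities, 128)), metric="correlation"),
    "human similarity ratings": pdist(rng.normal(size=(n_identities, 5)), metric="euclidean"),
}

# RDM of one face-selective region (e.g., FFA or OFA), here also simulated.
region_rdm = pdist(rng.normal(size=(n_identities, 200)), metric="correlation")

# Rank candidate models by how well each predicts the region's representational geometry.
fits = {name: spearmanr(region_rdm, m)[0] for name, m in model_rdms.items()}
for name, rho in sorted(fits.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} Spearman rho = {rho:+.2f}")
```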
14. Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020;199:101948. [PMID: 33189782] [DOI: 10.1016/j.pneurobio.2020.101948]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, reveals that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
15. Tsantani M, Cook R. Normal recognition of famous voices in developmental prosopagnosia. Sci Rep 2020;10:19757. [PMID: 33184411] [PMCID: PMC7661722] [DOI: 10.1038/s41598-020-76819-3]
Abstract
Developmental prosopagnosia (DP) is a condition characterised by lifelong face recognition difficulties. Recent neuroimaging findings suggest that DP may be associated with aberrant structure and function in multimodal regions of cortex implicated in the processing of both facial and vocal identity. These findings suggest that both facial and vocal recognition may be impaired in DP. To test this possibility, we compared the performance of 22 DPs and a group of typical controls, on closely matched tasks that assessed famous face and famous voice recognition ability. As expected, the DPs showed severe impairment on the face recognition task, relative to typical controls. In contrast, however, the DPs and controls identified a similar number of voices. Despite evidence of interactions between facial and vocal processing, these findings suggest some degree of dissociation between the two processing pathways, whereby one can be impaired while the other develops typically. A possible explanation for this dissociation in DP could be that the deficit originates in the early perceptual encoding of face structure, rather than at later, post-perceptual stages of face identity processing, which may be more likely to involve interactions with other modalities.
Affiliation(s)
- Maria Tsantani
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Richard Cook
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
16. Processing communicative facial and vocal cues in the superior temporal sulcus. Neuroimage 2020;221:117191. [PMID: 32711066] [DOI: 10.1016/j.neuroimage.2020.117191]
Abstract
Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.
17. Chen W, Cheung OS. Flexible face processing: Holistic processing of facial identity is modulated by task-irrelevant facial expression. Vision Res 2020;178:18-27. [PMID: 33075727] [DOI: 10.1016/j.visres.2020.09.008]
Abstract
Holistic processing is a hallmark of face perception and has been observed separately for facial identity and expression. While the identity of a face remains constant regardless of any changes in facial expressions, to what extent is holistic processing of facial identity affected by task-irrelevant facial expressions? If holistic processing is flexible and integrates both identity and expression information, the magnitude of holistic processing of facial identity may be systematically modulated by different facial expressions, due to either visual or emotional differences among the expressions. In Experiment 1, participants matched the identities of target halves of two sequentially presented face composites, with both composites showing either positive (happy) or negative (angry) expressions. The presentation duration for the test composite was either short (200 ms) or long (until a response). With a short presentation duration, the magnitude of holistic processing of identity for happy and angry composites was comparable. In contrast, with a long presentation duration, holistic processing of identity was reduced for angry compared with happy face composites. Experiment 2 replicated the results and showed reduced holistic processing of identity for face composites with either angry or neutral expressions, compared with happy expressions, given a long presentation duration. Because the modulation of facial expressions on holistic processing of facial identity was observed with long, but not short, presentation durations, these results suggest that such influence is unlikely solely due to visual differences, but may instead arise from cognitive evaluation of the emotions conveyed by facial expressions.
Affiliation(s)
- Wei Chen
  - Department of Psychology, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Olivia S Cheung
  - Department of Psychology, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
18. Borowiak K, Maguinness C, von Kriegstein K. Dorsal-movement and ventral-form regions are functionally connected during visual-speech recognition. Hum Brain Mapp 2020;41:952-972. [PMID: 31749219] [PMCID: PMC7267922] [DOI: 10.1002/hbm.24852]
Abstract
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity), might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
Affiliation(s)
- Kamila Borowiak
  - Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  - Berlin School of Mind and Brain, Humboldt University of Berlin, Berlin, Germany
- Corrina Maguinness
  - Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
  - Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
  - Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
19. Elli GV, Lane C, Bedny M. A Double Dissociation in Sensitivity to Verb and Noun Semantics Across Cortical Networks. Cereb Cortex 2019;29:4803-4817. [PMID: 30767007] [DOI: 10.1093/cercor/bhz014]
Abstract
What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing, but what information do these networks represent? We used multivoxel pattern analysis (MVPA) to investigate whether these networks are sensitive to lexicosemantic distinctions among verbs and among nouns and, if so, whether they are more sensitive to distinctions among words in their preferred grammatical class. Participants heard 4 types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and 4 types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG+) and inferior frontal gyrus (LIFG) responded more to verbs, whereas the inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double-dissociation in lexicosemantic sensitivity: classification was more accurate among verbs than nouns in the LMTG+, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the nonpreferred category in all regions. These results suggest that the lexicosemantic information about verbs and nouns is represented in partially nonoverlapping networks.
Affiliation(s)
- Giulia V Elli
  - Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Connor Lane
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Marina Bedny
  - Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
20. Faces and voices in the brain: A modality-general person-identity representation in superior temporal sulcus. Neuroimage 2019;201:116004. [DOI: 10.1016/j.neuroimage.2019.07.017]
21.
Abstract
How do we learn what we know about others? Answering this question requires understanding the perceptual mechanisms with which we recognize individuals and their actions, and the processes by which the resulting perceptual representations lead to inferences about people's mental states and traits. This review discusses recent behavioral, neural, and computational studies that have contributed to this broad research program, encompassing both social perception and social cognition.
Affiliation(s)
- Stefano Anzellotti
  - Department of Psychology, Boston College, Boston, Massachusetts 02467, USA
- Liane L Young
  - Department of Psychology, Boston College, Boston, Massachusetts 02467, USA
22. Soto FA, Vucovich LE, Ashby FG. Linking signal detection theory and encoding models to reveal independent neural representations from neuroimaging data. PLoS Comput Biol 2018;14:e1006470. [PMID: 30273337] [PMCID: PMC6181430] [DOI: 10.1371/journal.pcbi.1006470]
Abstract
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. For example, theories of face recognition have proposed either completely or partially independent processing of identity and emotional expression. Unfortunately, most previous research has only vaguely defined what is meant by “independence,” which hinders its precise quantification and testing. This article develops a new quantitative framework that links signal detection theory from psychophysics and encoding models from computational neuroscience, focusing on a special form of independence defined in the psychophysics literature: perceptual separability. The new theory allowed us, for the first time, to precisely define separability of neural representations and to theoretically link behavioral and brain measures of separability. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In particular, the theory identifies exactly what valid inferences can be made about independent encoding of stimulus dimensions from the results of multivariate analyses of neuroimaging data and psychophysical studies. In addition, commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insights on their correct use and interpretation. Finally, we apply this new framework to the study of separability of brain representations of face identity and emotional expression (neutral/sad) in a human fMRI study with male and female participants.

A common question in vision research is whether certain stimulus properties, like face identity and expression, are represented and processed independently. We develop a theoretical framework that allowed us, for the first time, to link behavioral and brain measures of independence. Unlike previous approaches, our framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach in the study of independence. This makes it possible to identify what kind of inferences can be made about brain representations from multivariate analyses of neuroimaging data or psychophysical studies. We apply this framework to the study of independent processing of face identity and expression.
Affiliation(s)
- Fabian A. Soto
  - Department of Psychology, Florida International University, Miami, Florida, United States of America
- Lauren E. Vucovich
  - Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America
- F. Gregory Ashby
  - Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America
23. Goal-relevant situations facilitate memory of neutral faces. Cogn Affect Behav Neurosci 2018;18:1269-1282. [PMID: 30264337] [DOI: 10.3758/s13415-018-0637-x]
Abstract
Emotional situations are typically better remembered than neutral situations, but the psychological conditions and brain mechanisms underlying this effect remain debated. Stimulus valence and affective arousal have been suggested to explain the major role of emotional stimuli in memory facilitation. However, neither valence nor arousal is a sufficient affective dimension to explain the memory facilitation effect. Several studies showed that negative and positive details are better remembered than neutral details. However, other studies showed that neutral information encoded and coupled with arousal did not result in a memory advantage compared with neutral information not coupled with arousal. Therefore, we suggest that the fundamental affective dimension responsible for memory facilitation is goal relevance. To test this hypothesis at behavioral and neural levels, we conducted a functional magnetic resonance imaging study and used neutral faces embedded in goal-relevant or goal-irrelevant daily life situations. At the behavioral level, we found that neutral faces encountered in goal-relevant situations were better remembered than those encountered in goal-irrelevant situations. To explain this effect, we studied neural activations involved in goal-relevant processing at encoding and in subsequent neutral face recognition. At encoding, activation of emotional brain regions (anterior cingulate, ventral striatum, ventral tegmental area, and substantia nigra) was greater for processing of goal-relevant situations than for processing of goal-irrelevant situations. At the recognition phase, despite the presentation of neutral faces, brain activation involved in social processing (superior temporal sulcus) during successful identity recognition was greater for faces previously encountered in goal-relevant than in goal-irrelevant situations.
24. The neural network for face recognition: Insights from an fMRI study on developmental prosopagnosia. Neuroimage 2017;169:151-161. [PMID: 29242103] [DOI: 10.1016/j.neuroimage.2017.12.023]
Abstract
Face recognition is supported by the collaborative work of multiple face-responsive regions in the brain. Based on findings from individuals with normal face recognition ability, a neural model has been proposed with the occipital face area (OFA), fusiform face area (FFA), and face-selective posterior superior temporal sulcus (pSTS) as the core face network (CFN) and the rest of the face-responsive regions as the extended face network (EFN). However, little is known about how these regions work collaboratively for face recognition in our daily life. Here we focused on individuals suffering from developmental prosopagnosia (DP), a neurodevelopmental disorder specifically impairing face recognition, to shed light on the infrastructure of the neural model of face recognition. Specifically, we used a variant of the global brain connectivity method to comprehensively explore resting-state functional connectivity (FC) among face-responsive regions in a large sample of DPs (N = 64). We found that both the FCs within the CFN and those between the CFN and EFN were largely reduced in DP. Importantly, the right OFA and FFA served as the dysconnectivity hubs within the CFN, i.e., FCs concerning these two regions within the CFN were largely disrupted. In addition, DPs' right FFA also showed reduced FCs with the EFN. Moreover, these disrupted FCs were related to DPs' behavioral deficit in face recognition, with the FCs from the FFA to the anterior temporal lobe (ATL) and pSTS being the most predictive. Based on these findings, we proposed a revised neural model of face recognition that demonstrates how interactions among face-responsive regions relate to face recognition.
|
25
|
Anzellotti S, Caramazza A, Saxe R. Multivariate pattern dependence. PLoS Comput Biol 2017; 13:e1005799. [PMID: 29155809 PMCID: PMC5714382 DOI: 10.1371/journal.pcbi.1005799] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2017] [Revised: 12/04/2017] [Accepted: 09/27/2017] [Indexed: 01/22/2023] Open
Abstract
When we perform a cognitive task, multiple brain regions are engaged. Understanding how these regions interact is a fundamental step toward uncovering the neural bases of behavior. Most research on interactions between brain regions has focused on their univariate responses. However, fine-grained patterns of response encode important information, as shown by multivariate pattern analysis. In the present article, we introduce and apply multivariate pattern dependence (MVPD): a technique to study the statistical dependence between brain regions in humans in terms of the multivariate relations between their patterns of responses. MVPD characterizes the responses in each brain region as trajectories in region-specific multidimensional spaces and models the multivariate relationship between these trajectories. We applied MVPD to the posterior superior temporal sulcus (pSTS) and to the fusiform face area (FFA), using a searchlight approach to reveal interactions between these seed regions and the rest of the brain. Across two different experiments, MVPD identified significant statistical dependence not detected by standard functional connectivity. Additionally, MVPD outperformed univariate connectivity in its ability to explain independent variance in the responses of individual voxels. Finally, MVPD uncovered different connectivity profiles associated with different representational subspaces of FFA: the first principal component of FFA showed differential connectivity with occipital and parietal regions implicated in processing low-level properties of faces, while the second and third components showed differential connectivity with anterior temporal regions implicated in processing invariant representations of face identity.

Human behavior is supported by systems of brain regions that exchange information to complete a task. This exchange of information leads to statistical relationships between regional responses over time. Most likely, these relationships link not only the mean responses of two brain regions but also their finer spatial patterns. Analyzing finer response patterns has been a key advance in the study of responses within individual regions, and it can be leveraged to study between-region interactions. To capture the overall statistical relationship between two brain regions, we need to describe each region's responses with respect to the dimensions that best account for its variation over time; these dimensions can differ from region to region. We introduce an approach in which each region's responses are characterized in terms of region-specific dimensions that best account for them, and the relationships between regions are modeled with multivariate linear models. We demonstrate that this approach provides a better account of the data than standard functional connectivity in two different experiments, and we use it to discover multiple dimensions within the fusiform face area that have different connectivity profiles with the rest of the brain.
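Because the abstract describes MVPD only at a conceptual level (region-specific dimensions plus a multivariate linear model between regions), a minimal sketch may help make the pipeline concrete. It is not the authors' implementation: PCA, ordinary least squares, the train/test split, and all data shapes and variable names are illustrative assumptions.

```python
# Minimal MVPD-style sketch: reduce each region to its own low-dimensional
# space, fit a multivariate linear mapping between regions on training data,
# and evaluate voxelwise variance explained on held-out data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T, V_seed, V_target = 200, 80, 120           # timepoints, voxels per region

# Simulated data: seed and target regions share a few latent signals.
latent = rng.standard_normal((T, 3))
seed = latent @ rng.standard_normal((3, V_seed)) + 0.5 * rng.standard_normal((T, V_seed))
target = latent @ rng.standard_normal((3, V_target)) + 0.5 * rng.standard_normal((T, V_target))

# Split into independent "runs" for fitting and evaluation.
train, test = slice(0, 150), slice(150, 200)

# Region-specific dimensions (here: 3 principal components per region).
pca_seed, pca_target = PCA(n_components=3), PCA(n_components=3)
X_train = pca_seed.fit_transform(seed[train])
Y_train = pca_target.fit_transform(target[train])

# Multivariate linear mapping from seed components to target components.
model = LinearRegression().fit(X_train, Y_train)

# Predict held-out target patterns, project back to voxel space, and
# compute variance explained in individual target voxels.
X_test = pca_seed.transform(seed[test])
Y_pred_vox = pca_target.inverse_transform(model.predict(X_test))
resid = target[test] - Y_pred_vox
var_explained = 1 - resid.var(axis=0) / target[test].var(axis=0)
print("mean variance explained per voxel:", var_explained.mean())
```

A univariate functional-connectivity baseline would instead correlate the two regions' mean timecourses; the comparison reported above is between that kind of model and the multivariate mapping sketched here.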
Affiliation(s)
- Stefano Anzellotti
- Brain and Cognitive Sciences Department, MIT, Cambridge, Massachusetts, United States of America
- Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, Massachusetts, United States of America
- Rebecca Saxe
- Brain and Cognitive Sciences Department, MIT, Cambridge, Massachusetts, United States of America
|
26
|
Cacioppo S, Juan E, Monteleone G. Predicting Intentions of a Familiar Significant Other Beyond the Mirror Neuron System. Front Behav Neurosci 2017; 11:155. [PMID: 28890691 PMCID: PMC5574908 DOI: 10.3389/fnbeh.2017.00155] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 08/04/2017] [Indexed: 01/08/2023] Open
Abstract
Inferring the intentions of others is one of the most intriguing issues in interpersonal interaction. Theories of embodied cognition and simulation suggest that this inference takes place through a direct and automatic matching process between an observed action and past actions, via the reactivation of past self-related sensorimotor experiences within the inferior frontoparietal network (including the mirror neuron system, MNS). On this working model, anticipatory representations of others' behaviors require internal predictive models of action formed from pre-established representations shared between observer and actor. The model therefore predicts that observers should be better at predicting intentions performed by a familiar actor than by a stranger. However, little is known about how the intention brain network is modulated by the familiarity between observer and actor. Here, we combined functional magnetic resonance imaging (fMRI) with a behavioral intention-inference task in which participants were asked to predict intentions from three types of actors: a familiar actor (their significant other), themselves (another familiar actor), and a non-familiar actor (a stranger). Participants were better at inferring intentions performed by familiar actors than by non-familiar actors, and this better performance was associated with greater activation within and beyond the inferior frontoparietal network, i.e., in brain areas related to familiarity (e.g., precuneus). In addition, and in line with Hebbian principles of neural modulation, the more cognitively close participants reported being to their partner, the less recruited were brain areas associated with action self-other comparison (e.g., inferior parietal lobule), attention (e.g., superior parietal lobule), recollection (hippocampus), and pair bonding (ventral tegmental area, VTA), suggesting that the more a shared mental representation has been pre-established, the more neurons show suppressed responses to information to which they are sensitive. These results suggest that the relationship between performance and the extent of neural activation during intention understanding may differ according to cognitive domain, brain region, and the cognitive interdependence between observer and actor.
Affiliation(s)
- Stephanie Cacioppo
- Pritzker School of Medicine, Biological Science Division, Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, United States; High-Performance Electrical NeuroImaging Laboratory, Center for Cognitive and Social Neuroscience (CCSN), University of Chicago, Chicago, IL, United States
- Elsa Juan
- Department of Psychology, University of Geneva, Geneva, Switzerland
- George Monteleone
- High-Performance Electrical NeuroImaging Laboratory, Center for Cognitive and Social Neuroscience (CCSN), University of Chicago, Chicago, IL, United States
|
27
|
Anterior temporal lobe and the representation of knowledge about people. Proc Natl Acad Sci U S A 2017; 114:4042-4044. [PMID: 28377512 DOI: 10.1073/pnas.1703438114] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|