1. Oswald F, Samra SK. A scoping review and index of body stimuli in psychological science. Behav Res Methods 2024;56:5434-5455. PMID: 38030921. DOI: 10.3758/s13428-023-02278-z.
Abstract
Naturalistic body stimuli are necessary for understanding many aspects of human psychology, yet there are no centralized databases of body stimuli. Furthermore, a large number of independently developed stimulus sets lack standardization and reproducibility potential, and the field lacks organization overall, contributing to issues of both replicability and generalizability in body-related research. We conducted a comprehensive scoping review to index and explore existing naturalistic whole-body stimuli. Our research questions were as follows: (1) What sets of naturalistic human whole-body stimuli are present in the literature? And (2) On what factors (e.g., demographics, emotion expression) do these stimuli vary? To be included, stimulus sets had to (1) include human bodies as stimuli; (2) be photographs, videos, or other depictions of real human bodies (not computer generated, drawn, etc.); (3) include the whole body (defined as torso, arms, and legs); and (4) be recognizable as human bodies, although edited images were permitted. We identified a relatively large number of existing stimulus sets (N = 79), which varied in their main manipulated factors and in the degree of visual information included (i.e., inclusion of heads and/or faces). However, stimulus sets were demographically homogeneous, skewed towards White, young adult, and female bodies. We identified significant issues in reporting and availability practices, posing a challenge to the generalizability, reliability, and reproducibility of body-related research. Accordingly, we urge researchers to adopt transparent and accessible practices and to take steps to diversify body stimuli.
Affiliation(s)
- Flora Oswald
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA.
- Department of Women's, Gender, and Sexuality Studies, The Pennsylvania State University, University Park, PA, USA.
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA.
- Department of Psychology, University of Denver, Denver, CO, USA.
2. Tanaka H, Jiang P. P1, N170, and N250 Event-related Potential Components Reflect Temporal Perception Processing in Face and Body Personal Identification. J Cogn Neurosci 2024;36:1265-1281. PMID: 38652104. DOI: 10.1162/jocn_a_02167.
Abstract
Human faces and bodies convey various socially important signals. Although adults encounter numerous new people in daily life, they can recognize hundreds to thousands of different individuals. However, the neural mechanisms that differentiate one person from another are unclear. This study aimed to clarify the temporal dynamics of the cognitive processes underlying face and body personal identification using face-sensitive ERP components (P1, N170, and N250). The study comprised three blocks (face-face, face-body, and body-body) of an ERP adaptation paradigm. Within each block, ERP components were compared across three conditions (same person, different person of the same sex, and different person of the opposite sex). The results showed that the P1 amplitude for the face-face block was significantly greater than that for the body-body block; that the N170 amplitude for the different person of the same sex condition was greater than that for the same person condition in the right hemisphere only; and that the N250 amplitude gradually increased as the degree of face and body sex-social categorization grew closer (i.e., same person condition > different person of the same sex condition > different person of the opposite sex condition). These results suggest that early processing handles the face and body separately, whereas structural encoding and personal identification process the face and body collaboratively.
Affiliation(s)
- Peilun Jiang
- Kanazawa University Graduate School, Kanazawa City, Japan
3. Arioli M, Segatta C, Papagno C, Tettamanti M, Cattaneo Z. Social perception in deaf individuals: A meta-analysis of neuroimaging studies. Hum Brain Mapp 2023;44:5402-5415. PMID: 37609693. PMCID: PMC10543108. DOI: 10.1002/hbm.26444.
Abstract
Deaf individuals may report difficulties in social interactions. However, whether these difficulties depend on deafness affecting social brain circuits is controversial. Here, we report the first meta-analysis comparing brain activations of hearing and (prelingually) deaf individuals during social perception. Our findings showed that deafness does not impact on the functional mechanisms supporting social perception. Indeed, both deaf and hearing control participants recruited regions of the action observation network during performance of different social tasks employing visual stimuli, and including biological motion perception, face identification, action observation, viewing, identification and memory for signs and lip reading. Moreover, we found increased recruitment of the superior-middle temporal cortex in deaf individuals compared with hearing participants, suggesting a preserved and augmented function during social communication based on signs and lip movements. Overall, our meta-analysis suggests that social difficulties experienced by deaf individuals are unlikely to be associated with brain alterations but may rather depend on non-supportive environments.
Affiliation(s)
- Maria Arioli
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Cecilia Segatta
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Costanza Papagno
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Zaira Cattaneo
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- IRCCS Mondino Foundation, Pavia, Italy
4. Levakov G, Sporns O, Avidan G. Fine-scale dynamics of functional connectivity in the face-processing network during movie watching. Cell Rep 2023;42:112585. PMID: 37285265. DOI: 10.1016/j.celrep.2023.112585.
Abstract
Mapping the human face-processing network is typically done during rest or using isolated, static face images, overlooking widespread cortical interactions obtained in response to naturalistic face dynamics and context. To determine how inter-subject functional correlation (ISFC) relates to face recognition scores, we measure cortical connectivity patterns in response to a dynamic movie in typical adults (N = 517). We find a positive correlation with recognition scores in edges connecting the occipital visual and anterior temporal regions and a negative correlation in edges connecting the attentional dorsal, frontal default, and occipital visual regions. We measure the inter-subject stimulus-evoked response at a single TR resolution and demonstrate that co-fluctuations in face-selective edges are related to activity in core face-selective regions and that the ISFC patterns peak during boundaries between movie segments rather than during the presence of faces. Our approach demonstrates how face processing is linked to fine-scale dynamics in attentional, memory, and perceptual neural circuitry.
Affiliation(s)
- Gidon Levakov
- Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Olaf Sporns
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Galia Avidan
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
5. Ward IL, Raven EP, de la Rosa S, Jones DK, Teufel C, von dem Hagen E. White matter microstructure in face and body networks predicts facial expression and body posture perception across development. Hum Brain Mapp 2023;44:2307-2322. PMID: 36661194. PMCID: PMC10028674. DOI: 10.1002/hbm.26211.
Abstract
Facial expression and body posture recognition have protracted developmental trajectories. Interactions between face and body perception, such as the influence of body posture on facial expression perception, also change with development. While the brain regions underpinning face and body processing are well-defined, little is known about how white-matter tracts linking these regions relate to perceptual development. Here, we obtained complementary diffusion magnetic resonance imaging (MRI) measures (fractional anisotropy [FA], spherical mean Ṧμ ), and a quantitative MRI myelin-proxy measure (R1), within white-matter tracts of face- and body-selective networks in children and adolescents and related these to perceptual development. In tracts linking occipital and fusiform face areas, facial expression perception was predicted by age-related maturation, as measured by Ṧμ and R1, as well as age-independent individual differences in microstructure, captured by FA and R1. Tract microstructure measures linking posterior superior temporal sulcus body region with anterior temporal lobe (ATL) were related to the influence of body on facial expression perception, supporting ATL as a site of face and body network convergence. Overall, our results highlight age-dependent and age-independent constraints that white-matter microstructure poses on perceptual abilities during development and the importance of complementary microstructural measures in linking brain structure and behaviour.
Affiliation(s)
- Isobel L Ward
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Erika P Raven
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Derek K Jones
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Christoph Teufel
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
- Elisabeth von dem Hagen
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, UK
6. Hu Y, O'Toole AJ. First impressions: Integrating faces and bodies in personality trait perception. Cognition 2023;231:105309. PMID: 36347653. DOI: 10.1016/j.cognition.2022.105309.
Abstract
Faces and bodies spontaneously elicit personality trait judgments (e.g., trustworthy, dominant, lazy). We examined how trait information from the face and body combine to form first impressions of the whole person and whether trait judgments from the face and body are affected by seeing the whole person. Consistent with the trait-dependence hypothesis, Experiment 1 showed that the relative contribution of the face and body to whole-person perception varied with the trait judged. Agreeableness traits (e.g., warm, aggressive, sympathetic, trustworthy) were inferred primarily from the face, conscientiousness traits (e.g., dependable, careless) from the body, and extraversion traits (e.g., dominant, quiet, confident) from the whole person. A control experiment showed that both clothing and body shape contributed to whole-person judgments. In Experiment 2, we found that a face (body) rated in the whole person elicited a different rating than when it was rated in isolation. Specifically, when trait ratings differed for an isolated face and body of the same identity, the whole-person context biased in-context ratings of the faces and bodies towards the ratings of the context. These results showed that face and body trait perception interact more than previously assumed. We combine current and established findings to propose a novel framework to account for face-body integration in trait perception. This framework incorporates basic elements such as perceptual determinants, nonperceptual determinants, trait formation, and integration, as well as predictive factors such as the rater, the person rated, and the situation.
Affiliation(s)
- Ying Hu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
7. Fan X, Guo Q, Zhang X, Fei L, He S, Weng X. Top-down modulation and cortical-AMG/HPC interaction in familiar face processing. Cereb Cortex 2022;33:4677-4687. PMID: 36156127. DOI: 10.1093/cercor/bhac371.
Abstract
Humans can accurately recognize familiar faces in only a few hundred milliseconds, but the underlying neural mechanism remains unclear. Here, we recorded intracranial electrophysiological signals from ventral temporal cortex (VTC), superior/middle temporal cortex (STC/MTC), medial parietal cortex (MPC), and amygdala/hippocampus (AMG/HPC) in 20 epilepsy patients while they viewed faces of famous people and strangers as well as common objects. In posterior VTC and MPC, familiarity-sensitive responses emerged significantly later than initial face-selective responses, suggesting that familiarity enhances face representations after they are first being extracted. Moreover, viewing famous faces increased the coupling between cortical areas and AMG/HPC in multiple frequency bands. These findings advance our understanding of the neural basis of familiar face perception by identifying the top-down modulation in local face-selective response and interactions between cortical face areas and AMG/HPC.
Affiliation(s)
- Xiaoxu Fan
- Department of Psychology, University of Washington, Seattle, WA, 98105, United States
- Qiang Guo
- Epilepsy Center, Guangdong Sanjiu Brain Hospital, Guangzhou, Guangdong, 510510, China
- Xinxin Zhang
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, Guangdong, 510898, China
- Lingxia Fei
- Epilepsy Center, Guangdong Sanjiu Brain Hospital, Guangzhou, Guangdong, 510510, China
- Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Xuchu Weng
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, Guangdong, 510898, China
8. Balgova E, Diveica V, Walbrin J, Binney RJ. The role of the ventrolateral anterior temporal lobes in social cognition. Hum Brain Mapp 2022;43:4589-4608. PMID: 35716023. PMCID: PMC9491293. DOI: 10.1002/hbm.25976.
Abstract
A key challenge for neurobiological models of social cognition is to elucidate whether brain regions are specialised for that domain. In recent years, discussion surrounding the role of anterior temporal regions epitomises such debates; some argue the anterior temporal lobe (ATL) is part of a domain‐specific network for social processing, while others claim it comprises a domain‐general hub for semantic representation. In the present study, we used ATL‐optimised fMRI to map the contribution of different ATL structures to a variety of paradigms frequently used to probe a crucial social ability, namely ‘theory of mind’ (ToM). Using multiple tasks enables a clearer attribution of activation to ToM as opposed to idiosyncratic features of stimuli. Further, we directly explored whether these same structures are also activated by a non‐social task probing semantic representations. We revealed that common to all of the tasks was activation of a key ventrolateral ATL region that is often invisible to standard fMRI. This constitutes novel evidence in support of the view that the ventrolateral ATL contributes to social cognition via a domain‐general role in semantic processing and against claims of a specialised social function.
Affiliation(s)
- Eva Balgova
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
- Veronica Diveica
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
- Jon Walbrin
- Faculdade de Psicologia e de Ciências da Educação, Universidade de Coimbra, Portugal
- Richard J Binney
- School of Human and Behavioural Sciences, Bangor University, Gwynedd, Wales, UK
9. Tidoni E, Holle H, Scandola M, Schindler I, Hill L, Cross ES. Human but not robotic gaze facilitates action prediction. iScience 2022;25:104462. PMID: 35707718. PMCID: PMC9189121. DOI: 10.1016/j.isci.2022.104462.
Abstract
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people's ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do) information from gaze and directional cues performed by humans, human-like robots, and a non-human-like object. People were faster to infer the mental content of human agents compared to robotic agents. Furthermore, although the absence of differences in control conditions rules out the use of non-mentalizing strategies, the human-like appearance of non-human agents may engage mentalizing processes to solve the task. Overall, results suggest that human-like robotic actions may be processed differently from humans' and objects' behavior. These findings inform our understanding of the relevance of an object's physical features in triggering mentalizing abilities and its relevance for human–robot interaction.
Highlights:
- People differently ascribe mental content to human-like and non-human-like agents
- A human-like shape may automatically engage mentalizing processes
- Human actions are interpreted faster than non-human actions
10. Foster C. A Distributed Model of Face and Body Integration. Neurosci Insights 2022;17:26331055221119221. PMID: 35991808. PMCID: PMC9386443. DOI: 10.1177/26331055221119221.
Abstract
Separated face- and body-responsive brain networks have been identified that show strong responses when observers view faces and bodies. It has been proposed that face and body processing may be initially separated in the lateral occipitotemporal cortex and then combined into a whole person representation in the anterior temporal cortex, or elsewhere in the brain. However, in contrast to this proposal, our recent study identified a common coding of face and body orientation (ie, facing direction) in the lateral occipitotemporal cortex, demonstrating an integration of face and body information at an early stage of face and body processing. These results, in combination with findings that show integration of face and body identity in the lateral occipitotemporal, parahippocampal and superior parietal cortex, and face and body emotional expression in the posterior superior temporal sulcus and medial prefrontal cortex, suggest that face and body integration may be more distributed than previously considered. I propose a new model of face and body integration, where areas at the intersection of face- and body-responsive regions play a role in integrating specific properties of faces and bodies, and distributed regions across the brain contribute to high-level, abstract integration of shared face and body properties.
Affiliation(s)
- Celia Foster
- Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
11. One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021;227:1423-1438. PMID: 34792643. DOI: 10.1007/s00429-021-02420-7.
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct regions of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object and their presence inevitably covary in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
12. Johnstone LT, Karlsson EM, Carey DP. Left-Handers Are Less Lateralized Than Right-Handers for Both Left and Right Hemispheric Functions. Cereb Cortex 2021;31:3780-3787. PMID: 33884412. PMCID: PMC8824548. DOI: 10.1093/cercor/bhab048.
Abstract
Many neuroscientific techniques have revealed that more left- than right-handers will have unusual cerebral asymmetries for language. After the original emphasis on frequency in the aphasia and epilepsy literatures, most neuropsychology and neuroimaging efforts rely on estimates of central tendency to compare these two handedness groups on any given measure of asymmetry. The inevitable reduction in mean lateralization in the left-handed group is often postulated as being due to reversed asymmetry in a small subset of them, but it could also be due to reduced asymmetry in many of the left-handers. These two possibilities have hugely different theoretical interpretations. Using functional magnetic resonance imaging localizer paradigms, we matched left- and right-handers for hemispheric dominance across four functions (verbal fluency, face perception, body perception, and scene perception). We then compared the degree of dominance between the two handedness groups for each of these four measures, conducting t-tests on the mean laterality indices. The results demonstrate that left-handers with typical cerebral asymmetries are less lateralized for language, faces, and bodies than their right-handed counterparts. These results are difficult to reconcile with current theories of language asymmetry or of handedness.
Affiliation(s)
- Leah T Johnstone
- School of Psychology, Perception, Action and Memory Research Group, Bangor Imaging Group, Bangor University, Bangor, LL59 2AS, UK
- Sport Psychology Group, UCFB, Manchester, M11 3FF, UK
- Emma M Karlsson
- School of Psychology, Perception, Action and Memory Research Group, Bangor Imaging Group, Bangor University, Bangor, LL59 2AS, UK
- David P Carey
- School of Psychology, Perception, Action and Memory Research Group, Bangor Imaging Group, Bangor University, Bangor, LL59 2AS, UK
13. Semantic Knowledge of Famous People and Places Is Represented in Hippocampus and Distinct Cortical Networks. J Neurosci 2021;41:2762-2779. PMID: 33547163. DOI: 10.1523/jneurosci.2034-19.2021.
Abstract
Studies have found that anterior temporal lobe (ATL) is critical for detailed knowledge of object categories, suggesting that it has an important role in semantic memory. However, in addition to information about entities, such as people and objects, semantic memory also encompasses information about places. We tested predictions stemming from the PMAT model, which proposes there are distinct systems that support different kinds of semantic knowledge: an anterior temporal (AT) network, which represents information about entities; and a posterior medial (PM) network, which represents information about places. We used representational similarity analysis to test for activation of semantic features when human participants viewed pictures of famous people and places, while controlling for visual similarity. We used machine learning techniques to quantify the semantic similarity of items based on encyclopedic knowledge in the Wikipedia page for each item and found that these similarity models accurately predict human similarity judgments. We found that regions within the AT network, including ATL and inferior frontal gyrus, represented detailed semantic knowledge of people. In contrast, semantic knowledge of places was represented within PM network areas, including precuneus, posterior cingulate cortex, angular gyrus, and parahippocampal cortex. Finally, we found that hippocampus, which has been proposed to serve as an interface between the AT and PM networks, represented fine-grained semantic similarity for both individual people and places. Our results provide evidence that semantic knowledge of people and places is represented separately in AT and PM areas, whereas hippocampus represents semantic knowledge of both categories.
Significance Statement
Humans acquire detailed semantic knowledge about people (e.g., their occupation and personality) and places (e.g., their cultural or historical significance). While research has demonstrated that brain regions preferentially respond to pictures of people and places, less is known about whether these regions preferentially represent semantic knowledge about specific people and places. We used machine learning techniques to develop a model of semantic similarity based on information available from Wikipedia, validating the model against similarity ratings from human participants. Using our computational model, we found that semantic knowledge about people and places is represented in distinct anterior temporal and posterior medial brain networks, respectively. We further found that hippocampus, an important memory center, represented semantic knowledge for both types of stimuli.
14. McCullough S, Emmorey K. Effects of deafness and sign language experience on the human brain: voxel-based and surface-based morphometry. Lang Cogn Neurosci 2021;36:422-439. PMID: 33959670. PMCID: PMC8096161. DOI: 10.1080/23273798.2020.1854793.
Abstract
We investigated how deafness and sign language experience affect the human brain by comparing neuroanatomical structures across congenitally deaf signers (n = 30), hearing native signers (n = 30), and hearing sign-naïve controls (n = 30). Both voxel-based and surface-based morphometry results revealed deafness-related structural changes in visual cortices (grey matter), right frontal lobe (gyrification), and left Heschl's gyrus (white matter). The comparisons also revealed changes associated with lifelong signing experience: expansions in the surface area within left anterior temporal and left occipital lobes, and a reduction in cortical thickness in the right occipital lobe for deaf and hearing signers. Structural changes within these brain regions may be related to adaptations in the neural networks involved in processing signed language (e.g. visual perception of face and body movements). Hearing native signers also had unique neuroanatomical changes (e.g. reduced gyrification in premotor areas), perhaps due to lifelong experience with both a spoken and a signed language.
Affiliation(s)
- Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, CA, USA
15. The occipital face area is causally involved in identity-related visual-semantic associations. Brain Struct Funct 2020;225:1483-1493. PMID: 32342226. PMCID: PMC7286950. DOI: 10.1007/s00429-020-02068-9.
Abstract
Faces are processed in a network of areas within regions of the ventral visual stream. However, familiar faces typically are characterized by additional associated information as well, such as episodic memories or semantic biographical information. The acquisition of such non-sensory, identity-specific knowledge plays a crucial role in our ability to recognize and identify someone we know. The occipital face area (OFA), an early part of the core face-processing network, has recently been found to be involved in the formation of identity-specific memory traces, but it is currently unclear whether this role is limited to unimodal visual information. The current experiments used transcranial magnetic stimulation (TMS) to test whether the OFA is involved in the association of a face with identity-specific semantic information, such as the name or job title of a person. We applied an identity-learning task in which unfamiliar faces were presented together with a name and a job title in the first, encoding phase. Simultaneously, TMS pulses were applied either to the left or right OFA or to Cz, as a control. In the subsequent retrieval phase, the previously seen faces were presented either with two names or with two job titles, and the task of the participants was to select the semantic information previously learned. We found that stimulation of the right or left OFA reduced subsequent retrieval performance for the face-associated job titles. This suggests a causal role of the OFA in the association of faces and related semantic information. Furthermore, in contrast to prior findings, we did not observe hemispheric differences in the effect of the TMS intervention, suggesting a similar role of the left and right OFA in the formation of visual-semantic associations. Our results suggest the need to reconsider hierarchical face-perception models and support distributed and recurrent models.
|
16
|
Integrating faces and bodies: Psychological and neural perspectives on whole person perception. Neurosci Biobehav Rev 2020; 112:472-486. [PMID: 32088346 DOI: 10.1016/j.neubiorev.2020.02.021] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 12/19/2019] [Accepted: 02/15/2020] [Indexed: 11/20/2022]
Abstract
The human "person" is a common percept we encounter. Research on person perception has focused on either face or body perception, with less attention paid to whole person perception. We review psychological and neuroscience studies aimed at understanding how face and body processing operate in concert to support intact person perception. We address this question considering: (a) the task to be accomplished (identification, emotion processing, detection); (b) the neural stage of processing (early/late visual mechanisms); and (c) the brain regions relevant for face/body/person processing. From the psychological perspective, we conclude that the integration of faces and bodies is mediated by the goal of the processing (e.g., emotion analysis, identification). From the neural perspective, we propose a hierarchical functional neural architecture of face-body integration that retains a degree of separation between the dorsal and ventral visual streams. We argue for two centers of integration: a ventral semantic integration hub that is the result of progressive, posterior-to-anterior face-body integration, and a social agent integration hub in the dorsal-stream STS.
|
17
|
The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. [PMID: 31801812 DOI: 10.1523/jneurosci.1378-19.2019] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 11/21/2019] [Accepted: 11/27/2019] [Indexed: 11/21/2022] Open
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed.
SIGNIFICANCE STATEMENT Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
|
18
|
Foster C, Zhao M, Romero J, Black MJ, Mohler BJ, Bartels A, Bülthoff I. Decoding subcategories of human bodies from both body- and face-responsive cortical regions. Neuroimage 2019; 202:116085. [PMID: 31401238 DOI: 10.1016/j.neuroimage.2019.116085] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2018] [Revised: 07/17/2019] [Accepted: 08/07/2019] [Indexed: 11/19/2022] Open
Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.
Affiliation(s)
- Celia Foster
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany
- Mintao Zhao
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; School of Psychology, University of East Anglia, UK
- Javier Romero
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael J Black
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Andreas Bartels
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany; Department of Psychology, University of Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany
|
19
|
Rice GE, Hoffman P, Binney RJ, Lambon Ralph MA. Concrete versus abstract forms of social concept: an fMRI comparison of knowledge about people versus social terms. Philos Trans R Soc Lond B Biol Sci 2019; 373:rstb.2017.0136. [PMID: 29915004 PMCID: PMC6015823 DOI: 10.1098/rstb.2017.0136] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/10/2018] [Indexed: 12/14/2022] Open
Abstract
The anterior temporal lobes (ATLs) play a key role in conceptual knowledge representation. The hub-and-spoke theory suggests that the contribution of the ATLs to semantic representation is (a) transmodal, i.e. integrating information from multiple sensorimotor and verbal modalities, and (b) pan-categorical, representing concepts from all categories. Another literature, however, suggests that this region's responses are modality- and category-selective; prominent examples include category selectivity for socially relevant concepts and face recognition. The predictions of each approach have never been directly compared. We used data from three studies to compare category-selective responses within the ATLs. Study 1 compared ATL responses to famous people versus another conceptual category (landmarks) from visual versus auditory inputs. Study 2 compared ATL responses to famous people from pictorial and written word inputs. Study 3 compared ATL responses to a different kind of socially relevant stimuli, namely abstract non-person-related words, in order to ascertain whether ATL subregions are engaged for social concepts more generally or only for person-related knowledge. Across all three studies a dominant bilateral ventral ATL cluster responded to all categories in all modalities. Anterior to this ‘pan-category’ transmodal region, a second cluster responded more weakly overall yet selectively for people, but did so equally for spoken names and faces (Study 1). A third region in the anterior superior temporal gyrus responded selectively to abstract socially relevant words (Study 3), but did not respond to concrete socially relevant words (i.e. written names; Study 2). These findings can be accommodated by the graded hub-and-spoke model of concept representation. On this view, the ventral ATL is the centre point of a bilateral ATL hub, which contributes to conceptual representation through transmodal distillation of information arising from multiple modality-specific association cortices. Partial specialization occurs across the graded ATL hub as a consequence of gradedly differential connectivity across the region. This article is part of the theme issue ‘Varieties of abstract concepts: development, use and representation in the brain’.
Affiliation(s)
- Grace E Rice
- Neuroscience and Aphasia Research Unit (NARU), University of Manchester, Manchester, UK
- Paul Hoffman
- Centre for Cognitive Ageing and Cognitive Epidemiology (CCACE), Department of Psychology, University of Edinburgh, Edinburgh, UK
|
20
|
Teufel C, Westlake MF, Fletcher PC, von dem Hagen E. A hierarchical model of social perception: Psychophysical evidence suggests late rather than early integration of visual information from facial expression and body posture. Cognition 2019; 185:131-143. [PMID: 30684782 PMCID: PMC6420341 DOI: 10.1016/j.cognition.2018.12.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2018] [Revised: 12/13/2018] [Accepted: 12/17/2018] [Indexed: 11/26/2022]
Abstract
Facial expressions are one of the most important sources of information about another's emotional state. More recently, other cues, such as body posture, have been shown to influence how facial expressions are perceived. It has been argued that this biasing effect is underpinned by an early integration of visual information from facial expression and body posture. Here, we replicate this biasing effect but, using a psychophysical procedure, show that adaptation to facial expressions is unaffected by body context. The integration of face and body information therefore occurs downstream of the sites of adaptation, known to be localised in high-level visual areas of the temporal lobe. Contrary to previous research, our findings thus provide direct evidence for late integration of information from facial expression and body posture. They are consistent with a hierarchical model of social perception, in which social signals from different sources are initially processed independently and in parallel by specialised visual mechanisms. Integration of these different inputs in later stages of the visual system then supports the emergence of the integrated whole-person percept that is consciously experienced.
Affiliation(s)
- Christoph Teufel
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
- Meryl F Westlake
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
- Paul C Fletcher
- Department of Psychiatry, University of Cambridge and Cambridgeshire and Peterborough NHS Foundation Trust, UK
- Elisabeth von dem Hagen
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
|
21
|
Theta and Alpha Oscillations Are Traveling Waves in the Human Neocortex. Neuron 2018; 98:1269-1281.e4. [PMID: 29887341 DOI: 10.1016/j.neuron.2018.05.019] [Citation(s) in RCA: 151] [Impact Index Per Article: 25.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2017] [Revised: 03/30/2018] [Accepted: 05/11/2018] [Indexed: 12/12/2022]
Abstract
Human cognition requires the coordination of neural activity across widespread brain networks. Here, we describe a new mechanism for large-scale coordination in the human brain: traveling waves of theta and alpha oscillations. Examining direct brain recordings from neurosurgical patients performing a memory task, we found contiguous clusters of cortex in individual patients with oscillations at specific frequencies within 2 to 15 Hz. These oscillatory clusters displayed spatial phase gradients, indicating that they formed traveling waves that propagated at ∼0.25-0.75 m/s. Traveling waves were relevant behaviorally because their propagation correlated with task events and was more consistent when subjects performed the task well. Human traveling theta and alpha waves can be modeled by a network of coupled oscillators because the direction of wave propagation correlated with the spatial orientation of local frequency gradients. Our findings suggest that oscillations support brain connectivity by organizing neural processes across space and time.
|
22
|
Reed CL, Bukach CM, Garber M, McIntosh DN. It's Not All About the Face: Variability Reveals Asymmetric Obligatory Processing of Faces and Bodies in Whole-Body Contexts. Perception 2018; 47:626-646. [PMID: 29665729 DOI: 10.1177/0301006618771270] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Researchers have sought to understand the specialized processing of faces and bodies in isolation, but recently they have considered how face and body information interact within the context of the whole body. Although studies suggest that face and body information can be integrated, it remains an open question whether this integration is obligatory and whether contributions of face and body information are symmetrical. In a selective attention task with whole-body stimuli, we focused attention on either the face or body and tested whether variation in the irrelevant part could be ignored. We manipulated orientation to determine the extent to which inversion disrupted obligatory face and body processing. Obligatory processing was evidenced as performance changes in discrimination that depended on stimulus orientation when the irrelevant region varied. For upright but not inverted face discrimination, participants could not ignore body posture variation, even when it was not diagnostic to the task. However, participants could ignore face variation for upright body posture discrimination but not for inverted posture discrimination. The extent to which face and body information necessarily influence each other in whole-body contexts appears to depend on both domain-general attentional and face- or body-specific holistic processing mechanisms.
|
23
|
Chiou R, Lambon Ralph MA. The anterior-ventrolateral temporal lobe contributes to boosting visual working memory capacity for items carrying semantic information. Neuroimage 2017; 169:453-461. [PMID: 29289617 PMCID: PMC5864511 DOI: 10.1016/j.neuroimage.2017.12.085] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2017] [Revised: 12/21/2017] [Accepted: 12/27/2017] [Indexed: 11/29/2022] Open
Abstract
Working memory (WM) is a buffer that temporarily maintains information, be it visual or auditory, in an active state, caching its contents for online rehearsal or manipulation. How the brain enables long-term semantic knowledge to affect the WM buffer is a theoretically significant issue awaiting further investigation. In the present study, we capitalise on the knowledge about famous individuals as a ‘test-case’ to study how it impinges upon WM capacity for human faces and its neural substrate. Using continuous theta-burst transcranial stimulation combined with a psychophysical task probing WM storage for varying contents, we provide compelling evidence that (1) faces (regardless of familiarity) continued to accrue in the WM buffer with longer encoding time, whereas for meaningless stimuli (colour shades) there was little increment; (2) the rate of WM accrual was significantly more efficient for famous faces, compared to unknown faces; (3) the right anterior-ventrolateral temporal lobe (ATL) causally mediated this superior WM storage for famous faces. Specifically, disrupting the ATL (a region tuned to semantic knowledge including person identity) selectively hinders WM accrual for celebrity faces while leaving the accrual for unfamiliar faces intact. Further, this ‘semantically-accelerated’ storage is impervious to disruption of the right middle frontal gyrus and vertex, supporting the specific and causative contribution of the right ATL. Our finding advances the understanding of the neural architecture of WM, demonstrating that it depends on interaction with long-term semantic knowledge underpinned by the ATL, which causally expands the WM buffer when visual content carries semantic information.
Affiliation(s)
- Rocco Chiou
- The Neuroscience and Aphasia Research Unit (NARU), Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, UK
- Matthew A Lambon Ralph
- The Neuroscience and Aphasia Research Unit (NARU), Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, UK
|
24
|
Kaiser D, Peelen MV. Transformation from independent to integrative coding of multi-object arrangements in human visual cortex. Neuroimage 2017; 169:334-341. [PMID: 29277645 DOI: 10.1016/j.neuroimage.2017.12.065] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 10/08/2017] [Accepted: 12/20/2017] [Indexed: 10/18/2022] Open
Abstract
To optimize processing, the human visual system utilizes regularities present in naturalistic visual input. One of these regularities is the relative position of objects in a scene (e.g., a sofa in front of a television), with behavioral research showing that regularly positioned objects are easier to perceive and to remember. Here we use fMRI to test how positional regularities are encoded in the visual system. Participants viewed pairs of objects that formed minimalistic two-object scenes (e.g., a "living room" consisting of a sofa and television) presented in their regularly experienced spatial arrangement or in an irregular arrangement (with interchanged positions). Additionally, single objects were presented centrally and in isolation. Multi-voxel activity patterns evoked by the object pairs were modeled as the average of the response patterns evoked by the two single objects forming the pair. In two experiments, this approximation in object-selective cortex was significantly less accurate for the regularly than the irregularly positioned pairs, indicating integration of individual object representations. More detailed analysis revealed a transition from independent to integrative coding along the posterior-anterior axis of the visual cortex, with the independent component (but not the integrative component) being almost perfectly predicted by object selectivity across the visual hierarchy. These results reveal a transitional stage between individual object and multi-object coding in visual cortex, providing a possible neural correlate of efficient processing of regularly positioned objects in natural scenes.
Affiliation(s)
- Daniel Kaiser
- Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, TN, Italy; Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin-Dahlem, Germany
- Marius V Peelen
- Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, TN, Italy; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
|
25
|
Phillips JS, Das SR, McMillan CT, Irwin DJ, Roll EE, Da Re F, Nasrallah IM, Wolk DA, Grossman M. Tau PET imaging predicts cognition in atypical variants of Alzheimer's disease. Hum Brain Mapp 2017; 39:691-708. [PMID: 29105977 DOI: 10.1002/hbm.23874] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Revised: 09/15/2017] [Accepted: 10/23/2017] [Indexed: 12/26/2022] Open
Abstract
Accumulation of paired helical filament tau contributes to neurodegeneration in Alzheimer's disease (AD). 18F-flortaucipir is a positron emission tomography (PET) radioligand sensitive to tau in AD, but its clinical utility will depend in part on its ability to predict cognitive symptoms in diverse dementia phenotypes associated with selective, regional uptake. We examined associations between 18F-flortaucipir and cognition in 14 mildly-impaired patients (12 with cerebrospinal fluid analytes consistent with AD pathology) who had amnestic (n = 5) and non-amnestic AD syndromes, including posterior cortical atrophy (PCA, n = 5) and logopenic-variant primary progressive aphasia (lvPPA, n = 4). Amnestic AD patients had deficits in memory; lvPPA in language; and both amnestic AD and PCA patients in visuospatial function. Associations with cognition were tested using sparse regression and compared to associations in anatomical regions-of-interest (ROIs). 18F-flortaucipir uptake was expected to show regionally-specific correlations with each domain. In multivariate analyses, uptake was elevated in neocortical areas specifically associated with amnestic and non-amnestic syndromes. Uptake in left anterior superior temporal gyrus accounted for 67% of the variance in language performance. Uptake in right lingual gyrus predicted 85% of the variance in visuospatial performance. Memory was predicted by uptake in right fusiform gyrus and cuneus as well as a cluster comprising right anterior hippocampus and amygdala; this eigenvector explained 57% of the variance in patients' scores. These results provide converging evidence for associations between 18F-flortaucipir uptake, tau pathology, and patients' cognitive symptoms.
Affiliation(s)
- Jeffrey S Phillips
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Sandhitsu R Das
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Corey T McMillan
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- David J Irwin
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Emily E Roll
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Fulvio Da Re
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104; PhD Program in Neuroscience, University of Milano-Bicocca, Milan, Italy; School of Medicine and Surgery, Milan Center for Neuroscience (NeuroMI), University of Milano-Bicocca, Milan, Italy
- Ilya M Nasrallah
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- David A Wolk
- Penn Memory Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Murray Grossman
- Penn Frontotemporal Degeneration Center, Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
|
26
|
Interoceptive signals impact visual processing: Cardiac modulation of visual body perception. Neuroimage 2017; 158:176-185. [DOI: 10.1016/j.neuroimage.2017.06.064] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2017] [Revised: 06/19/2017] [Accepted: 06/22/2017] [Indexed: 11/19/2022] Open
|
27
|
Johnstone LT, Downing PE. Dissecting the visual perception of body shape with the Garner selective attention paradigm. VISUAL COGNITION 2017. [DOI: 10.1080/13506285.2017.1334733] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Leah T. Johnstone
- School of Psychology, Bangor University, Bangor, UK
- School of Psychology, University of East Anglia, Norwich, UK
|
28
|
Joyal M, Brambati SM, Laforce RJ, Montembeault M, Boukadi M, Rouleau I, Macoir J, Joubert S, Fecteau S, Wilson MA. The Role of the Left Anterior Temporal Lobe for Unpredictable and Complex Mappings in Word Reading. Front Psychol 2017; 8:517. [PMID: 28424650 PMCID: PMC5380751 DOI: 10.3389/fpsyg.2017.00517] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2016] [Accepted: 03/21/2017] [Indexed: 11/13/2022] Open
Abstract
The anterior temporal lobes (ATLs) have been consistently associated with semantic processing, which, in turn, plays a key role in reading single words aloud. This study aimed to investigate (1) the reading abilities of patients with the semantic variant of primary progressive aphasia (svPPA), and (2) the relationship between gray matter (GM) volume of the left ATL and word reading performance using voxel-based morphometry (VBM). Three groups of participants (svPPA patients, Alzheimer's disease (AD) patients, and healthy elderly adults) performed a reading task with exception words, regular words, and pseudowords, and underwent a structural magnetic resonance imaging scan. For exception words, the svPPA group had lower accuracy and a greater number of regularization errors compared with the control groups of healthy participants and AD patients. Similarly, for regular words, svPPA patients had lower accuracy than AD patients and a greater number of errors involving complex orthography-to-phonology mappings (OPM) compared with both control groups. VBM analyses revealed that GM volume of the left ATL was associated with the number of regularization errors. GM volume of the left lateral ATL was also associated with the number of errors involving complex OPM during regular word reading. Our results suggest that the left ATL plays a role in the reading of exception words, in accordance with its role in semantic processing. They further support the involvement of the left lateral ATL in combinatorial processes, including the integration of semantic and phonological information, for both exception and regular words.
Affiliation(s)
- Marilyne Joyal
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec and Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale and Département de Réadaptation, Université Laval, Québec City, QC, Canada
- Simona M Brambati
- Centre de Recherche de l'Institut Universitaire de Gériatrie and Département de Psychologie, Université de Montréal, Montréal, QC, Canada
- Robert J Laforce
- Clinique Interdisciplinaire de Mémoire, Centre Hospitalier Universitaire de Québec and Département des Sciences Neurologiques, Université Laval, Québec City, QC, Canada
- Maxime Montembeault
- Centre de Recherche de l'Institut Universitaire de Gériatrie and Département de Psychologie, Université de Montréal, Montréal, QC, Canada
- Mariem Boukadi
- Centre de Recherche de l'Institut Universitaire de Gériatrie and Département de Psychologie, Université de Montréal, Montréal, QC, Canada
- Isabelle Rouleau
- Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Axe Neurosciences et Département de Psychologie, Université du Québec à Montréal, Montréal, QC, Canada
- Joël Macoir
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec and Département de Réadaptation, Université Laval, Québec City, QC, Canada
- Sven Joubert
- Centre de Recherche de l'Institut Universitaire de Gériatrie and Département de Psychologie, Université de Montréal, Montréal, QC, Canada
- Shirley Fecteau
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec and Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale and Département de Réadaptation, Université Laval, Québec City, QC, Canada
- Maximiliano A Wilson
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec and Département de Réadaptation, Université Laval, Québec City, QC, Canada
|
29
|
Premereur E, Taubert J, Janssen P, Vogels R, Vanduffel W. Effective Connectivity Reveals Largely Independent Parallel Networks of Face and Body Patches. Curr Biol 2016; 26:3269-3279. [DOI: 10.1016/j.cub.2016.09.059] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2016] [Revised: 09/04/2016] [Accepted: 09/28/2016] [Indexed: 10/20/2022]
|