1. Rolls ET, Feng J, Zhang R. Selective activations and functional connectivities to the sight of faces, scenes, body parts and tools in visual and non-visual cortical regions leading to the human hippocampus. Brain Struct Funct 2024; 229:1471-1493. [PMID: 38839620] [PMCID: PMC11176242] [DOI: 10.1007/s00429-024-02811-6]
Abstract
Connectivity maps are now available for the 360 cortical regions in the Human Connectome Project Multimodal Parcellation atlas. Here we add function to these maps by measuring selective fMRI activations and functional connectivity increases to stationary visual stimuli of faces, scenes, body parts and tools from 956 HCP participants. Faces activate regions in the ventrolateral visual cortical stream (FFC), in the superior temporal sulcus (STS) visual stream for face and head motion, and in inferior parietal visual (PGi) and somatosensory (PF) regions. Scenes activate ventromedial visual stream VMV and PHA regions in the parahippocampal scene area; medial (7m) and lateral parietal (PGp) regions; and the reward-related medial orbitofrontal cortex. Body parts activate the inferior temporal cortex object regions (TE1p, TE2p), but also visual motion regions (MT, MST, FST); the inferior parietal visual (PGi, PGs) and somatosensory (PF) regions; and the unpleasantness-related lateral orbitofrontal cortex. Tools activate an intermediate ventral stream area (VMV3, VVC, PHA3), visual motion regions (FST), somatosensory (1, 2), and auditory (A4, A5) cortical regions. The findings add function to cortical connectivity maps and show how stationary visual stimuli activate other cortical regions related to their associations, including visual motion, somatosensory, auditory, semantic, and value-related orbitofrontal cortex regions.
Affiliation(s)
- Edmund T Rolls
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, 200403, China.
- Oxford Centre for Computational Neuroscience, Oxford, UK.
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai, 200403, China.
- Ruohan Zhang
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.

2. Kosakowski HL, Cohen MA, Herrera L, Nichoson I, Kanwisher N, Saxe R. Cortical Face-Selective Responses Emerge Early in Human Infancy. eNeuro 2024; 11:ENEURO.0117-24.2024. [PMID: 38871455] [PMCID: PMC11258539] [DOI: 10.1523/eneuro.0117-24.2024]
Abstract
In human adults, multiple cortical regions respond robustly to faces, including the occipital face area (OFA) and fusiform face area (FFA), implicated in face perception, and the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC), implicated in higher-level social functions. When in development does face selectivity arise in each of these regions? Here, we combined two awake infant functional magnetic resonance imaging (fMRI) datasets to create a sample twice the size of previous reports (n = 65 infants; 2.6-9.6 months). Infants watched movies of faces, bodies, objects, and scenes while fMRI data were collected. Despite variable amounts of data from each infant, individual-subject whole-brain activation maps revealed responses to faces compared to nonface visual categories in the approximate locations of the OFA, FFA, STS, and MPFC. To determine the strength and nature of face selectivity in these regions, we used cross-validated functional region of interest analyses. Across this larger sample, face responses in the OFA, FFA, STS, and MPFC were significantly greater than responses to bodies, objects, and scenes. Even the youngest infants (2-5 months) showed significantly face-selective responses in the FFA, STS, and MPFC, but not the OFA. These results demonstrate that face selectivity is present in multiple cortical regions within months of birth, providing powerful constraints on theories of cortical development.
Affiliation(s)
- Heather L Kosakowski
- Department of Psychology, Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138
- Michael A Cohen
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Department of Psychology and Program in Neuroscience, Amherst College, Amherst, Massachusetts 01002
- Lyneé Herrera
- Psychology Department, University of Denver, Denver, Colorado 80210
- Isabel Nichoson
- Tulane Brain Institute, Tulane University, New Orleans, Louisiana 70118
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

3. Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024; 175:1-11. [PMID: 38691922] [DOI: 10.1016/j.cortex.2024.03.009]
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but possible neural underpinnings for such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy across all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden.
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden

4. Kausel L, Michon M, Soto-Icaza P, Aboitiz F. A multimodal interface for speech perception: the role of the left superior temporal sulcus in social cognition and autism. Cereb Cortex 2024; 34:84-93. [PMID: 38696598] [DOI: 10.1093/cercor/bhae066]
Abstract
Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently, a third visual pathway specialized for social perception was proposed, in which the right superior temporal sulcus (STS) plays a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration of speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs of those in the left, language-dominant hemisphere, which sustain multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes such as multimodal attention to socially relevant stimuli, we underscore its potential relevance for comprehending neurodevelopmental conditions characterized by challenges in social communication, such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life for individuals with atypical neurodevelopment.
Affiliation(s)
- Leonie Kausel
- Centro de Estudios en Neurociencia Humana y Neuropsicología (CENHN), Facultad de Psicología, Universidad Diego Portales, Vergara 275, 8370076 Santiago, Chile
- Maëva Michon
- Praxiling Laboratory, Joint Research Unit (UMR 5267), Centre National de la Recherche Scientifique (CNRS), Université Paul Valéry, Route de Mende, 34199 Montpellier cedex 5, France
- Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Patricia Soto-Icaza
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Av. Las Condes 12461, edificio 3, piso 3, 7590943 Las Condes, Santiago, Chile
- Francisco Aboitiz
- Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile

5. Rolls ET. Two what, two where, visual cortical streams in humans. Neurosci Biobehav Rev 2024; 160:105650. [PMID: 38574782] [DOI: 10.1016/j.neubiorev.2024.105650]
Abstract
Recent cortical connectivity investigations lead to new concepts about 'What' and 'Where' visual cortical streams in humans, and how they connect to other cortical systems. A ventrolateral 'What' visual stream leads to the inferior temporal visual cortex for object and face identity, and provides 'What' information to the hippocampal episodic memory system, the anterior temporal lobe semantic system, and the orbitofrontal cortex emotion system. A superior temporal sulcus (STS) 'What' visual stream utilising connectivity from the temporal and parietal visual cortex responds to moving objects and faces, and face expression, and connects to the orbitofrontal cortex for emotion and social behaviour. A ventromedial 'Where' visual stream builds feature combinations for scenes, and provides 'Where' inputs via the parahippocampal scene area to the hippocampal episodic memory system that are also useful for landmark-based navigation. The dorsal 'Where' visual pathway to the parietal cortex provides for actions in space, but also provides coordinate transforms to provide inputs to the parahippocampal scene area for self-motion update of locations in scenes in the dark or when the view is obscured.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China.
6. Zhang Y, Zhang H, Fu S. Relative saliency affects attentional capture and suppression of color and face singleton distractors: evidence from event-related potential studies. Cereb Cortex 2024; 34:bhae176. [PMID: 38679483] [DOI: 10.1093/cercor/bhae176]
Abstract
Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.
Affiliation(s)
- Yue Zhang
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 230 Wai Huan Xi Road, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Hai Zhang
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 230 Wai Huan Xi Road, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
- Shimin Fu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 230 Wai Huan Xi Road, Guangzhou Higher Education Mega Center, Guangzhou 510006, China

7. McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. [PMID: 37918399] [PMCID: PMC10841461] [DOI: 10.1016/j.cub.2023.10.015]
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing each other), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for the representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream, and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
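The "unique variance" analysis described above follows a standard variance-partitioning logic: fit a cross-validated encoding model on all labeled feature sets, refit it with one feature set held out, and attribute the drop in predictive performance to that set. A minimal sketch of that logic in Python (not the authors' pipeline; the feature matrices and the ridge-regression choice are illustrative assumptions):

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    def unique_variance(feature_sets, y):
        # feature_sets: dict mapping name -> (n_samples, n_features) design matrix
        # y: (n_samples,) response of one voxel or ROI
        # Assumes at least two feature sets.
        def cv_r2(X):
            model = RidgeCV(alphas=np.logspace(-2, 4, 13))
            return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

        full_r2 = cv_r2(np.hstack(list(feature_sets.values())))
        # Unique variance of a set = full-model R^2 minus R^2 without that set
        return {
            name: full_r2 - cv_r2(
                np.hstack([X for n, X in feature_sets.items() if n != name])
            )
            for name in feature_sets
        }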
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA.
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA

8. McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. [PMID: 37805385] [PMCID: PMC10841760] [DOI: 10.1016/j.tics.2023.09.001]
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converges to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA

9. Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? Imaging Neuroscience 2023; 1:1-20. [PMID: 37719835] [PMCID: PMC10503480] [DOI: 10.1162/imag_a_00003]
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions of interest (ROIs). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction-sensitive area in anterior STS. Indeed, direct comparison suggests modality-specific tuning, with SI-pSTS preferring visual information and aSTS preferring auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom

10. Breu MS, Ramezanpour H, Dicke PW, Thier P. A frontoparietal network for volitional control of gaze following. Eur J Neurosci 2023; 57:1723-1735. [PMID: 36967647] [DOI: 10.1111/ejn.15975]
Abstract
Gaze following is a major element of non-verbal communication and important for successful social interactions. Human gaze following is a fast and almost reflex-like behaviour, yet it can be volitionally controlled and suppressed to some extent if inappropriate or unnecessary, given the social context. In order to identify the neural basis of the cognitive control of gaze following, we carried out an event-related fMRI experiment in which human subjects' eye movements were tracked while they were exposed to gaze cues in two distinct contexts: a baseline gaze-following condition, in which subjects were instructed to use gaze cues to shift their attention to a gazed-at spatial target, and a control condition, in which subjects were required to ignore the gaze cue and instead shift their attention to a distinct spatial target selected according to a colour-mapping rule, requiring the suppression of gaze following. We identified a suppression-related blood-oxygen-level-dependent (BOLD) response in a frontoparietal network comprising the dorsolateral prefrontal cortex (dlPFC), orbitofrontal cortex (OFC), anterior insula, precuneus, and posterior parietal cortex (PPC). These findings suggest that overexcitation of frontoparietal circuits, which in turn suppress the gaze-following patch, might be a potential cause of gaze-following deficits in clinical populations.
Affiliation(s)
- M S Breu
- Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- H Ramezanpour
- Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- P W Dicke
- Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- P Thier
- Cognitive Neurology Laboratory, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany

11. Zhou Q, Du J, Gao R, Hu S, Yu T, Wang Y, Pan NC. Discriminative neural pathways for perception-cognition activity of color and face in the human brain. Cereb Cortex 2023; 33:1972-1984. [PMID: 35580851] [DOI: 10.1093/cercor/bhac186]
Abstract
Human performance can be examined using a visual lens. The identification of psychophysical colors and emotional faces with perceptual visual pathways may remain invalid for simple detection tasks. In particular, how the dorsal and ventral visual processing streams handle discriminative visual perception and subsequent cognitive activity remains obscure. We explored these issues using stereoelectroencephalography recordings obtained from patients with pharmacologically resistant epilepsy. Delayed match-to-sample paradigms were used to analyze the processing of simple colors and complex emotional faces in the human brain. We showed that the angular-cuneus gyrus acts as a pioneer in discriminating the two features, and that dorsal regions, including the middle frontal gyrus (MFG) and postcentral gyrus, as well as ventral regions, such as the middle temporal gyrus (MTG) and posterior superior temporal sulcus (pSTS), were involved in processing incongruent colors and faces. Critically, beta- and gamma-band activities between the cuneus and MTG and between the cuneus and pSTS would tune a separate pathway of incongruency processing. In addition, the posterior insular gyrus, fusiform, and MFG were found to support attentional modulation of the two features via alpha-band activities. These findings suggest a neural basis for the discriminative pathways of perception-cognition activities in the human brain.
Affiliation(s)
- Qilin Zhou
- Department of Neurology, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China; Beijing Key Laboratory of Neuromodulation, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China
- Jialin Du
- Department of Pharmacy Phase I Clinical Trial Center, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China
- Runshi Gao
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China
- Shimin Hu
- Department of Neurology, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China; Beijing Key Laboratory of Neuromodulation, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China
- Tao Yu
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China
- Yuping Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China; Beijing Key Laboratory of Neuromodulation, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China; Institute of Sleep and Consciousness Disorders, Center of Epilepsy, Beijing Institute for Brain Disorders, Capital Medical University, No. 10, Xi Tou Tiao, Youanmen Wai, Fengtai District, Beijing, 100069, China
- Na Clara Pan
- Department of Neurology, Xuanwu Hospital, Capital Medical University, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China; Beijing Key Laboratory of Neuromodulation, No. 45, Changchun Street, Xicheng District, Beijing, 100053, China

12. Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy

13. Saalasti S, Alho J, Lahnakoski JM, Bacha-Trams M, Glerean E, Jääskeläinen IP, Hasson U, Sams M. Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading. Brain Behav 2023; 13:e2869. [PMID: 36579557] [PMCID: PMC9927859] [DOI: 10.1002/brb3.2869]
Abstract
Introduction: Few of us are skilled lipreaders, while most struggle with the task. Neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood.
Methods: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6-100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of the Blood Oxygenation Level Dependent (BOLD) signal time courses.
Results: Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal and frontal cortices, precuneus, and cerebellum. Additionally, listening to and reading connected naturalistic speech particularly activated higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading, probably paralleling the limited understanding obtained via lipreading. Importantly, higher lipreading test scores and subjective estimates of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex.
Conclusions: Our new data illustrate that findings from prior studies using well-controlled repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results may suggest an efficient use of brain areas dealing with phonological processing in skilled lipreaders.
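The voxel-wise time-course comparison described in the Methods is commonly computed as a leave-one-out inter-subject correlation: each participant's BOLD time course is correlated, voxel by voxel, with the average time course of all other participants. A minimal sketch under that assumption (hypothetical array shapes, not the authors' code):

    import numpy as np

    def leave_one_out_isc(bold):
        # bold: (n_subjects, n_timepoints, n_voxels) BOLD time courses,
        # assumed to be aligned to a common anatomical space
        n_subj, n_time, n_vox = bold.shape
        # z-score each subject's time course, voxel by voxel
        z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
        isc = np.empty((n_subj, n_vox))
        for s in range(n_subj):
            # average the remaining subjects, then re-standardize over time
            others = np.delete(z, s, axis=0).mean(axis=0)
            others = (others - others.mean(axis=0)) / others.std(axis=0)
            # Pearson r equals the mean product of two z-scored series
            isc[s] = (z[s] * others).mean(axis=0)
        return isc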
Affiliation(s)
- Satu Saalasti
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Juha M Lahnakoski
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Center Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Mareike Bacha-Trams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Enrico Glerean
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto Studios - MAGICS, Aalto University, Espoo, Finland

14. Takahashi Y, Murata S, Ueki M, Tomita H, Yamashita Y. Interaction between Functional Connectivity and Neural Excitability in Autism: A Novel Framework for Computational Modeling and Application to Biological Data. Computational Psychiatry 2023; 7:14-29. [PMID: 38774640] [PMCID: PMC11104370] [DOI: 10.5334/cpsy.93]
Abstract
Functional connectivity (FC) and neural excitability may interact to affect symptoms of autism spectrum disorder (ASD). We tested this hypothesis with neural network simulations and applied the resulting framework to functional magnetic resonance imaging (fMRI) data. A hierarchical recurrent neural network embodying predictive processing theory was subjected to a facial emotion recognition task. Neural network simulations examined the effects of FC and neural excitability on changes in neural representations through developmental learning, and ultimately on ASD-like performance. Next, by mapping each neural network condition to subject subgroups on the basis of fMRI parameters, the association between ASD-like performance in the simulation and ASD diagnosis in the corresponding subject subgroup was examined. In the neural network simulation, the more homogeneous the neural excitability of the lower-level network, the more ASD-like the performance (reduced generalization and emotion recognition capability). In addition, in homogeneous networks, higher FC produced more ASD-like performance, while in heterogeneous networks, higher FC produced less ASD-like performance, demonstrating that FC and neural excitability interact. As an underlying mechanism, neural excitability determines the generalization capability of top-down prediction, and FC determines whether the model's information processing is top-down prediction-dependent or bottom-up sensory-input-dependent. In the fMRI datasets, ASD was indeed more prevalent in subject subgroups corresponding to the network condition showing ASD-like performance. The current study suggests an interaction between FC and neural excitability and presents a novel framework for computational modeling and biological application of the developmental learning process underlying cognitive alterations in ASD.
Affiliation(s)
- Yuta Takahashi
- Department of Psychiatry, Tohoku University Hospital, Japan
- Department of Psychiatry, Graduate School of Medicine, Tohoku University, Japan
- Department of Information Medicine, National Center of Neurology and Psychiatry, Japan
- Shingo Murata
- Department of Electronics and Electrical Engineering, Faculty of Science and Technology, Keio University, Japan
- Masao Ueki
- School of Information and Data Sciences, Nagasaki University, Japan
- Hiroaki Tomita
- Department of Psychiatry, Tohoku University Hospital, Japan
- Department of Psychiatry, Graduate School of Medicine, Tohoku University, Japan
- Yuichi Yamashita
- Department of Information Medicine, National Center of Neurology and Psychiatry, Japan

15. Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. [PMID: 35296892] [PMCID: PMC9890475] [DOI: 10.1093/cercor/bhac095]
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA

16. Amlerova J, Laczó J, Nedelska Z, Laczó M, Vyhnálek M, Zhang B, Sheardova K, Angelucci F, Andel R, Hort J. Emotional prosody recognition is impaired in Alzheimer’s disease. Alzheimers Res Ther 2022; 14:50. [PMID: 35382868] [PMCID: PMC8985328] [DOI: 10.1186/s13195-022-00989-7]
Abstract
Background: The ability to understand emotions is often disturbed in patients with cognitive impairments. Right temporal lobe structures play a crucial role in emotional processing, especially the amygdala, temporal pole (TP), superior temporal sulcus (STS), and anterior cingulate (AC). These regions are affected in the early stages of Alzheimer's disease (AD). The aim of our study was to evaluate emotional prosody recognition (EPR) in participants with amnestic mild cognitive impairment (aMCI) due to AD, AD dementia patients, and cognitively healthy controls, and to measure volumes or thickness of the brain structures involved in this process. In addition, we correlated EPR scores with cognitive impairment as measured by the MMSE.
Methods: Eighty-nine participants from the Czech Brain Aging Study (43 aMCI due to AD, 36 AD dementia, and 23 controls) underwent the Prosody Emotional Recognition Test. This experimental test included the playback of 25 sentences with neutral meaning, each recorded with a different emotional prosody (happiness, sadness, fear, disgust, anger). Volume of the amygdala and thickness of the TP, STS, and rostral and caudal parts of the AC (RAC and CAC) were measured using the FreeSurfer algorithm software. ANCOVA was used to evaluate EPR score differences. Receiver operating characteristic (ROC) analysis was used to assess the ability of the EPR test to differentiate the control group from the aMCI and dementia groups. Pearson's correlation coefficients were calculated to explore relationships between EPR scores, structural brain measures, and MMSE.
Results: EPR was lower in the dementia and aMCI groups compared with controls. The EPR total score had high sensitivity in distinguishing not only between controls and patients, but also between controls and aMCI, controls and dementia, and aMCI and dementia. EPR decreased with disease severity, as it correlated with MMSE. There was a significant positive correlation of EPR with thickness of the right TP, STS, and bilateral RAC.
Conclusions: EPR is impaired in AD dementia and aMCI due to AD. These data suggest that the broad range of AD symptoms may include specific deficits in the emotional sphere, which further complicate the patient's quality of life.
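The ROC analysis reported here amounts to asking how well a single test score separates two diagnostic groups. A minimal sketch of that computation in Python (the scores below are hypothetical, not study data):

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical EPR total scores (25 sentences, so max = 25)
    epr_controls = np.array([22, 21, 24, 20, 23, 19, 22])
    epr_patients = np.array([15, 17, 12, 14, 16, 13, 18])

    scores = np.concatenate([epr_controls, epr_patients])
    labels = np.concatenate([np.zeros(len(epr_controls)),
                             np.ones(len(epr_patients))])

    # Lower EPR indicates impairment, so negate the score so that
    # higher values point toward the patient group
    auc = roc_auc_score(labels, -scores)
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    print(f"AUC = {auc:.2f}")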
17. Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Computers in Human Behavior Reports 2022. [DOI: 10.1016/j.chbr.2022.100263]
18. Michon M, Zamorano-Abramson J, Aboitiz F. Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech. Front Psychol 2022; 13:829083. [PMID: 35432052] [PMCID: PMC9007199] [DOI: 10.3389/fpsyg.2022.829083]
Abstract
While influential works since the 1970s have widely assumed that imitation is an innate skill in both human and non-human primate neonates, recent empirical studies and meta-analyses have challenged this view, indicating other forms of reward-based learning as relevant factors in the development of social behavior. The translation of visual input into matching motor output that underlies imitation abilities instead seems to develop along with social interactions and sensorimotor experience during infancy and childhood. Recently, a new visual stream has been identified in both human and non-human primate brains, updating the dual visual stream model. This third pathway is thought to be specialized for dynamic aspects of social perception, such as eye gaze and facial expression, and crucially for audio-visual integration of speech. Here, we review empirical studies addressing an understudied but crucial aspect of speech and communication, namely the processing of visual orofacial cues (i.e., the perception of a speaker's lips and tongue movements) and its integration with vocal auditory cues. Throughout this review, we offer new insights from our understanding of speech as the product of the evolution and development of a rhythmic and multimodal organization of sensorimotor brain networks, supporting volitional motor control of the upper vocal tract and audio-visual voice-face integration.
Affiliation(s)
- Maëva Michon
- Laboratory for Cognitive and Evolutionary Neuroscience, Department of Psychiatry, Faculty of Medicine, Interdisciplinary Center for Neuroscience, Pontificia Universidad Católica de Chile, Santiago, Chile
- Centro de Estudios en Neurociencia Humana y Neuropsicología, Facultad de Psicología, Universidad Diego Portales, Santiago, Chile
- José Zamorano-Abramson
- Centro de Investigación en Complejidad Social, Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- Francisco Aboitiz
- Laboratory for Cognitive and Evolutionary Neuroscience, Department of Psychiatry, Faculty of Medicine, Interdisciplinary Center for Neuroscience, Pontificia Universidad Católica de Chile, Santiago, Chile

19. Staib M, Frühholz S. Distinct functional levels of human voice processing in the auditory cortex. Cereb Cortex 2022; 33:1170-1185. [PMID: 35348635] [PMCID: PMC9930621] [DOI: 10.1093/cercor/bhac128]
Abstract
Voice signaling is integral to human communication, and a cortical voice area seemed to support the discrimination of voices from other auditory objects. This large cortical voice area in the auditory cortex (AC) was suggested to process voices selectively, but its functional differentiation remained elusive. We used neuroimaging while humans processed voices and nonvoice sounds, and artificial sounds that mimicked certain voice sound features. First and surprisingly, specific auditory cortical voice processing beyond basic acoustic sound analyses is only supported by a very small portion of the originally described voice area in higher-order AC located centrally in superior Te3. Second, besides this core voice processing area, large parts of the remaining voice area in low- and higher-order AC only accessorily process voices and might primarily pick up nonspecific psychoacoustic differences between voices and nonvoices. Third, a specific subfield of low-order AC seems to specifically decode acoustic sound features that are relevant but not exclusive for voice detection. Taken together, the previously defined voice area might have been overestimated since cortical support for human voice processing seems rather restricted. Cortical voice processing also seems to be functionally more diverse and embedded in broader functional principles of the human auditory system.
Affiliation(s)
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, University of Zurich, 8050 Zurich, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zürich, Binzmuhlestrasse 14/18, 8050 Zürich, Switzerland

20. Wurm MF, Caramazza A. Two 'what' pathways for action and object recognition. Trends Cogn Sci 2021; 26:103-116. [PMID: 34702661] [DOI: 10.1016/j.tics.2021.10.003]
Abstract
The ventral visual stream is conceived as a pathway for object recognition. However, we also recognize the actions an object can be involved in. Here, we show that action recognition critically depends on a pathway in lateral occipitotemporal cortex, partially overlapping and topographically aligned with object representations that are precursors for action recognition. By contrast, object features that are more relevant for object recognition, such as color and texture, are typically found in ventral occipitotemporal cortex. We argue that occipitotemporal cortex contains similarly organized lateral and ventral 'what' pathways for action and object recognition, respectively. This account explains a number of observed phenomena, such as the duplication of object domains and the specific representational profiles in lateral and ventral cortex.
Affiliation(s)
- Moritz F Wurm
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy.
- Alfonso Caramazza
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy; Department of Psychology, Harvard University, 33 Kirkland St, Cambridge, MA 02138, USA