1
Del Vecchio M, Avanzini P, Gerbella M, Costa S, Zauli FM, d'Orio P, Focacci E, Sartori I, Caruana F. Anatomo-functional basis of emotional and motor resonance elicited by facial expressions. Brain 2024; 147:3018-3031. [PMID: 38365267] [DOI: 10.1093/brain/awae050]
Abstract
Simulation theories predict that the observation of others' expressions modulates neural activity in the same centres that control their production. This hypothesis has been developed by two models, postulating that the visual input is projected either directly to the motor system for action recognition (motor resonance) or to emotional/interoceptive regions for emotional contagion and social synchronization (emotional resonance). Here we investigated the role of frontal/insular regions in the processing of observed emotional expressions by combining intracranial recording, electrical stimulation and effective connectivity. First, we recorded intracranially from prefrontal, premotor or anterior insular regions of 44 patients during the passive observation of emotional expressions, finding widespread modulations in prefrontal/insular regions (anterior cingulate cortex, anterior insula, orbitofrontal cortex and inferior frontal gyrus) and motor territories (Rolandic operculum and inferior frontal junction). Subsequently, we electrically stimulated the activated sites, finding that (i) in the anterior cingulate cortex and anterior insula, stimulation elicited emotional/interoceptive responses, as predicted by the 'emotional resonance' model; (ii) in the Rolandic operculum it evoked face/mouth sensorimotor responses, in line with the 'motor resonance' model; and (iii) all other regions were unresponsive or revealed functions unrelated to the processing of facial expressions. Finally, we traced the effective connectivity to sketch a network-level description of these regions, finding that the anterior cingulate cortex and the anterior insula are reciprocally interconnected, while the Rolandic operculum is part of the parieto-frontal circuits and poorly connected with the former. These results support the hypothesis that the pathways posited by the 'emotional resonance' and 'motor resonance' models work in parallel, differing in their spatio-temporal fingerprints, reactivity to electrical stimulation and connectivity patterns.
Affiliation(s)
- Maria Del Vecchio
- Institute of Neuroscience, National Research Council of Italy (CNR), 43125 Parma, Italy
- Pietro Avanzini
- Institute of Neuroscience, National Research Council of Italy (CNR), 43125 Parma, Italy
- Marzio Gerbella
- Department of Medicine and Surgery, University of Parma, 43125 Parma, Italy
- Sara Costa
- Department of Medicine and Surgery, University of Parma, 43125 Parma, Italy
- Flavia Maria Zauli
- 'Claudio Munari' Epilepsy Surgery Center, ASST GOM Niguarda, 20142 Milan, Italy
- Piergiorgio d'Orio
- 'Claudio Munari' Epilepsy Surgery Center, ASST GOM Niguarda, 20142 Milan, Italy
- Elena Focacci
- Department of Medicine and Surgery, University of Parma, 43125 Parma, Italy
- Ivana Sartori
- 'Claudio Munari' Epilepsy Surgery Center, ASST GOM Niguarda, 20142 Milan, Italy
- Fausto Caruana
- Institute of Neuroscience, National Research Council of Italy (CNR), 43125 Parma, Italy
2
Xing F, Sheffield AG, Jadi MP, Chang SWC, Nandy AS. Automated 3D analysis of social head-gaze behaviors in freely moving marmosets. bioRxiv 2024:2024.02.16.580693. [PMID: 38405818] [PMCID: PMC10888878] [DOI: 10.1101/2024.02.16.580693]
Abstract
Social communication relies on the ability to perceive and interpret the direction of others' attention, which is commonly conveyed through head orientation and gaze direction in both humans and non-human primates. However, traditional social gaze experiments in non-human primates require restraining head movements, which significantly limits the animals' natural behavioral repertoire. Here, we developed a novel framework for accurately tracking facial features and three-dimensional head-gaze orientations of multiple freely moving common marmosets (Callithrix jacchus). To track the facial features of marmoset dyads in an arena, we adapted deep-learning-based computer vision tools and combined them with triangulation algorithms applied to the detected facial features, generating dynamic geometric facial frames in 3D space while overcoming common occlusion challenges. Furthermore, we constructed a virtual cone, oriented perpendicular to the facial frame, to model head-gaze direction. Using this framework, we detected different types of interactive social gaze events, including partner-directed gaze and jointly directed gaze to a shared spatial location. We observed clear effects of sex and familiarity on both interpersonal distance and gaze dynamics in marmoset dyads. Unfamiliar pairs exhibited more stereotyped patterns of arena occupancy, more sustained levels of social gaze across inter-animal distance, and increased gaze monitoring, whereas familiar pairs exhibited higher levels of joint gaze. Moreover, males displayed significantly elevated levels of gaze toward females' faces and the surrounding regions, irrespective of familiarity. Our study lays the groundwork for rigorous quantification of primate behaviors in naturalistic settings.
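The geometric core of such a pipeline can be sketched as follows (a minimal illustration, not the authors' code: the two-camera setup, landmark names and cone angle are all assumptions). Detected 2D facial landmarks from two calibrated views are triangulated to 3D, a facial plane is fit to three landmarks with its normal taken as the head-gaze direction, and a target counts as "gazed at" if it falls inside a cone around that direction.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen by two
    calibrated cameras. P1, P2: 3x4 projection matrices; x1, x2:
    2D image coordinates of the same landmark. Returns a 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def head_gaze_direction(left_eye, right_eye, nose):
    """Normal of the plane through three 3D facial landmarks,
    used as a proxy for head-gaze direction."""
    n = np.cross(right_eye - left_eye, nose - left_eye)
    return n / np.linalg.norm(n)

def in_gaze_cone(origin, direction, target, half_angle_deg=10.0):
    """True if `target` lies inside a cone of `half_angle_deg`
    around the gaze ray starting at `origin`."""
    v = target - origin
    cos_angle = np.dot(v, direction) / np.linalg.norm(v)
    return cos_angle >= np.cos(np.radians(half_angle_deg))
```

Partner-directed gaze would then be scored by testing the partner's head position against the cone; joint gaze, by testing whether both animals' cones contain the same location.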
Affiliation(s)
- Feng Xing
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT
- Department of Neuroscience, Yale University, New Haven, CT
- Alec G Sheffield
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT
- Department of Neuroscience, Yale University, New Haven, CT
- Department of Psychiatry, Yale University, New Haven, CT
- Monika P Jadi
- Department of Neuroscience, Yale University, New Haven, CT
- Department of Psychiatry, Yale University, New Haven, CT
- Wu Tsai Institute, Yale University, New Haven, CT
- Steve W C Chang
- Department of Neuroscience, Yale University, New Haven, CT
- Department of Psychology, Yale University, New Haven, CT
- Wu Tsai Institute, Yale University, New Haven, CT
- Kavli Institute for Neuroscience, Yale University, New Haven, CT
- Anirvan S Nandy
- Department of Neuroscience, Yale University, New Haven, CT
- Department of Psychology, Yale University, New Haven, CT
- Wu Tsai Institute, Yale University, New Haven, CT
- Kavli Institute for Neuroscience, Yale University, New Haven, CT
3
Lombardi G, Sciutti A, Rea F, Vannucci F, Di Cesare G. Humanoid facial expressions as a tool to study human behaviour. Sci Rep 2024; 14:133. [PMID: 38167552] [PMCID: PMC10762044] [DOI: 10.1038/s41598-023-45825-6]
Abstract
Besides action vitality forms, facial expressions represent another fundamental social cue that enables observers to infer the affective state of others. In the present study, we proposed the iCub robot as an interactive and controllable agent to investigate whether and how different facial expressions, associated with different action vitality forms, could modulate the motor behaviour of participants. To this purpose, we carried out a kinematic experiment in which 18 healthy participants observed video clips of the iCub robot performing a rude or gentle request with a happy or angry facial expression. After this request, they were asked to grasp an object and pass it to the iCub robot. Results showed that the iCub's facial expressions significantly modulated participants' motor responses. In particular, the observation of a happy facial expression associated with a rude action decreased specific kinematic parameters such as velocity, acceleration and maximum height of movement, whereas the observation of an angry facial expression associated with a gentle action increased the same parameters. Moreover, a behavioural study corroborated these findings, showing that the perception of the same action vitality form changed when associated with a positive or negative facial expression.
Affiliation(s)
- G Lombardi
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, Genova, Italy
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- A Sciutti
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- F Rea
- Robotics Brain and Cognitive Sciences Unit, Italian Institute of Technology, Genova, Italy
- F Vannucci
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- G Di Cesare
- Cognitive Architecture for Collaborative Technologies Unit (CONTACT), Italian Institute of Technology, Genova, Italy
- Department of Medicine and Surgery, Neuroscience Unit, University of Parma, via Volturno 39/E, 43125, Parma, Italy
4
Ford A, Kovacs-Balint ZA, Wang A, Feczko E, Earl E, Miranda-Domínguez Ó, Li L, Styner M, Fair D, Jones W, Bachevalier J, Sánchez MM. Functional maturation in visual pathways predicts attention to the eyes in infant rhesus macaques: Effects of social status. Dev Cogn Neurosci 2023; 60:101213. [PMID: 36774827] [PMCID: PMC9925610] [DOI: 10.1016/j.dcn.2023.101213]
Abstract
Differences in looking at the eyes of others are one of the earliest behavioral markers for social difficulties in neurodevelopmental disabilities, including autism. However, it is unknown how early visuo-social experiences relate to the maturation of infant brain networks that process visual social stimuli. We investigated functional connectivity (FC) within the ventral visual object pathway as a contributing neural system. Densely sampled, longitudinal eye-tracking and resting-state fMRI (rs-fMRI) data were collected from infant rhesus macaques, an important model of human social development, from birth through 6 months of age. Mean trajectories were fit for both datasets, and individual trajectories from subjects with both eye-tracking and rs-fMRI data were used to test for brain-behavior relationships. Exploratory findings showed that infants with greater increases in FC between left V1 and V3 visual areas had an earlier increase in eye-looking before 2 months. This relationship was moderated by social status, such that infants with low social status showed a stronger association between left V1-V3 connectivity and eye-looking than high-status infants. Results indicate that maturation of the visual object pathway may provide an important neural substrate supporting adaptive transitions in social visual attention during infancy.
Affiliation(s)
- Aiden Ford
- Neuroscience Program, Emory University, Atlanta, GA, USA; Marcus Autism Center, USA
- Arick Wang
- Emory Natl. Primate Res. Ctr., Emory Univ., Atlanta, GA, USA; Dept of Psychology, Emory University, Atlanta, GA, USA
- Eric Feczko
- Dept. of Pediatrics, University of Minnesota, Minneapolis, MN, USA; Masonic Institute of the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Eric Earl
- Data Science and Sharing Team, National Institute of Mental Health, NIH, DHHS, Bethesda, MD, USA
- Óscar Miranda-Domínguez
- Dept. of Pediatrics, University of Minnesota, Minneapolis, MN, USA; Masonic Institute of the Developing Brain, University of Minnesota, Minneapolis, MN, USA
- Longchuan Li
- Marcus Autism Center, USA; Children's Healthcare of Atlanta, GA, USA; Dept. of Pediatrics, Emory University, Sch. of Med., Atlanta, GA, USA
- Martin Styner
- Dept. of Psychiatry, Univ. of North Carolina, Chapel Hill, NC, USA
- Damien Fair
- Dept. of Pediatrics, University of Minnesota, Minneapolis, MN, USA; Masonic Institute of the Developing Brain, University of Minnesota, Minneapolis, MN, USA; Institute of Child Development, University of Minnesota, Minneapolis, MN, USA; Center for Magnetic Resonance Research and Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Warren Jones
- Marcus Autism Center, USA; Children's Healthcare of Atlanta, GA, USA; Dept. of Pediatrics, Emory University, Sch. of Med., Atlanta, GA, USA
- Jocelyne Bachevalier
- Emory Natl. Primate Res. Ctr., Emory Univ., Atlanta, GA, USA; Dept of Psychology, Emory University, Atlanta, GA, USA
- Mar M Sánchez
- Emory Natl. Primate Res. Ctr., Emory Univ., Atlanta, GA, USA; Dept. Psychiatry & Behavioral Sciences, Emory Univ., Sch. of Med., Atlanta, GA, USA
5
Russ BE, Koyano KW, Day-Cooney J, Perwez N, Leopold DA. Temporal continuity shapes visual responses of macaque face patch neurons. Neuron 2023; 111:903-914.e3. [PMID: 36630962] [PMCID: PMC10023462] [DOI: 10.1016/j.neuron.2022.12.021]
Abstract
Macaque inferior temporal cortex neurons respond selectively to complex visual images, with recent work showing that they are also entrained reliably by the evolving content of natural movies. To what extent does temporal continuity itself shape the responses of high-level visual neurons? We addressed this question by measuring how cells in face-selective regions of the macaque visual cortex were affected by the manipulation of a movie's temporal structure. Sampling a 5-min movie at 1 s intervals, we measured neural responses to randomized, brief stimuli of different lengths, ranging from 800 ms dynamic movie snippets to 100 ms static frames. We found that the disruption of temporal continuity strongly altered neural response profiles, particularly in the early response period after stimulus onset. The results suggest that models of visual system function based on discrete and randomized visual presentations may not translate well to the brain's natural modes of operation.
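The stimulus manipulation can be sketched like this (an illustrative reconstruction, not the authors' code; the frame rate and all variable names are assumptions): sample the movie at 1 s intervals, cut a snippet of the desired duration at each sample point (or hold a single frame for the "static" condition), and randomize the presentation order.

```python
import numpy as np

FPS = 25                           # assumed frame rate
movie = np.arange(5 * 60 * FPS)    # stand-in: frame indices of a 5-min movie

def sample_snippets(movie, fps, duration_ms, static=False):
    """Cut a snippet of `duration_ms` at every 1-s sample point.
    If `static`, hold the first sampled frame for the whole duration."""
    n = max(1, round(duration_ms * fps / 1000))
    starts = np.arange(0, len(movie) - n + 1, fps)   # one start per second
    if static:
        return [np.repeat(movie[s], n) for s in starts]
    return [movie[s:s + n] for s in starts]

dynamic_800 = sample_snippets(movie, FPS, 800)               # 800 ms movie snippets
static_100 = sample_snippets(movie, FPS, 100, static=True)   # 100 ms static frames

rng = np.random.default_rng(0)
order = rng.permutation(len(dynamic_800))   # randomized presentation order
```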
Affiliation(s)
- Brian E Russ
- Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA; Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Psychiatry, New York University at Langone, New York City, NY 10016, USA
- Kenji W Koyano
- Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- Julian Day-Cooney
- Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- Neda Perwez
- Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA
- David A Leopold
- Section on Cognitive Neurophysiology and Imaging, National Institute of Mental Health, Bethesda, MD 20814, USA; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, Bethesda, MD 20814, USA
6
Disentangling Object Category Representations Driven by Dynamic and Static Visual Input. J Neurosci 2023; 43:621-634. [PMID: 36639892] [PMCID: PMC9888510] [DOI: 10.1523/jneurosci.0371-22.2022]
Abstract
Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of "object kinematograms" to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of 6 object categories and applying the motion to limited-lifetime random dot patterns. Using functional magnetic resonance imaging (fMRI) (n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, while more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing.

SIGNIFICANCE STATEMENT Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
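The cross-format decoding logic can be illustrated with a toy sketch (simulated voxel patterns and a simple nearest-centroid classifier; this is not the authors' analysis code, and all names and parameters are assumptions): a classifier trained on response patterns from one stimulus format is tested on patterns from the other, and above-chance accuracy indicates a format-general category representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials, n_categories = 100, 120, 6

# Category "templates" shared across stimulus formats, plus
# format-specific noise: a toy stand-in for fMRI voxel patterns.
templates = rng.normal(size=(n_categories, n_voxels))

def simulate_patterns(noise=1.0):
    labels = rng.integers(0, n_categories, n_trials)
    X = templates[labels] + noise * rng.normal(size=(n_trials, n_voxels))
    return X, labels

X_static, y_static = simulate_patterns()
X_dynamic, y_dynamic = simulate_patterns()

def fit_centroids(X, y):
    """Mean pattern per category = nearest-centroid 'training'."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_categories)])

def predict(centroids, X):
    """Assign each trial to the category with the closest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

centroids = fit_centroids(X_static, y_static)              # train on static
acc = (predict(centroids, X_dynamic) == y_dynamic).mean()  # test on dynamic
print(f"cross-format decoding accuracy: {acc:.2f} (chance = {1/n_categories:.2f})")
```

With shared templates the classifier generalizes across formats; if the two formats used unrelated templates, accuracy would fall to chance, mirroring the logic of the cross-format test.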
7
Inagaki M, Inoue KI, Tanabe S, Kimura K, Takada M, Fujita I. Rapid processing of threatening faces in the amygdala of nonhuman primates: subcortical inputs and dual roles. Cereb Cortex 2023; 33:895-915. [PMID: 35323915] [PMCID: PMC9890477] [DOI: 10.1093/cercor/bhac109]
Abstract
A subcortical pathway through the superior colliculus and pulvinar has been proposed to provide the amygdala with rapid but coarse visual information about emotional faces. However, evidence for short-latency, facial expression-discriminating responses from individual amygdala neurons is lacking; even if such a response exists, how it might contribute to stimulus detection is unclear. Also, no definitive anatomical evidence is available for the assumed pathway. Here we showed that ensemble responses of amygdala neurons in monkeys carried robust information about open-mouthed, presumably threatening, faces within 50 ms after stimulus onset. This short-latency signal was not found in the visual cortex, suggesting a subcortical origin. Temporal analysis revealed that the early response contained excitatory and suppressive components. The excitatory component may be useful for sending rapid signals downstream, while the sharpening of the rising phase of later-arriving inputs (presumably from the cortex) by the suppressive component might improve the processing of facial expressions over time. Injection of a retrograde trans-synaptic tracer into the amygdala revealed presumed monosynaptic labeling in the pulvinar and disynaptic labeling in the superior colliculus, including the retinorecipient layers. We suggest that the early amygdala responses originating from the colliculo-pulvino-amygdalar pathway play dual roles in threat detection.
Affiliation(s)
- Mikio Inagaki
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology and Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
- Ken-ichi Inoue
- Systems Neuroscience Section, Primate Research Institute, Kyoto University, 41-2 Kanrin, Inuyama, Aichi 484-8506, Japan
- Soshi Tanabe
- Systems Neuroscience Section, Primate Research Institute, Kyoto University, 41-2 Kanrin, Inuyama, Aichi 484-8506, Japan
- Kei Kimura
- Systems Neuroscience Section, Primate Research Institute, Kyoto University, 41-2 Kanrin, Inuyama, Aichi 484-8506, Japan
- Masahiko Takada
- Systems Neuroscience Section, Primate Research Institute, Kyoto University, 41-2 Kanrin, Inuyama, Aichi 484-8506, Japan
- Ichiro Fujita
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology and Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
8
Liu N, Behrmann M, Turchi JN, Avidan G, Hadj-Bouziane F, Ungerleider LG. Bidirectional and parallel relationships in macaque face circuit revealed by fMRI and causal pharmacological inactivation. Nat Commun 2022; 13:6787. [PMID: 36351907] [PMCID: PMC9646786] [DOI: 10.1038/s41467-022-34451-x]
Abstract
Although the presence of face patches in primate inferotemporal (IT) cortex is well established, the functional and causal relationships among these patches remain elusive. In two monkeys, muscimol was infused sequentially into each patch or pair of patches to assess their respective influence on the remaining IT face network and the amygdala, as determined using fMRI. The results revealed that anterior face patches required input from middle face patches for their responses to both faces and objects, while the face selectivity in middle face patches arose, in part, from top-down input from anterior face patches. Moreover, we uncovered a parallel fundal-lateral functional organization in the IT face network, supporting dual routes (dorsal-ventral) in face processing within IT cortex as well as between IT cortex and the amygdala. Our findings of the causal relationship among the face patches demonstrate that the IT face circuit is organized into multiple functional compartments.
Affiliation(s)
- Ning Liu
- Section on Neurocircuitry, Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Marlene Behrmann
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Janita N Turchi
- Laboratory of Neuropsychology, NIMH, NIH, Bethesda, MD, 20892, USA
- Galia Avidan
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel
- Fadila Hadj-Bouziane
- Section on Neurocircuitry, Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA
- INSERM, U1028, CNRS UMR5292, Lyon Neuroscience Research Center, ImpAct Team, F-69000, Lyon, France
- University UCBL Lyon 1, F-69000, Lyon, France
- Leslie G Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA
9
Lombardi G, Gerbella M, Marchi M, Sciutti A, Rizzolatti G, Di Cesare G. Investigating form and content of emotional and non-emotional laughing. Cereb Cortex 2022; 33:4164-4172. [PMID: 36089830] [PMCID: PMC10068279] [DOI: 10.1093/cercor/bhac334]
Abstract
Like cold actions (i.e. actions devoid of emotional content), emotions are expressed with different vitality forms. For example, when an individual experiences a positive emotion, such as laughing as an expression of happiness, this emotion can be conveyed to others by different intensities of facial expressions and body postures. In the present study, we investigated whether the observation of emotions expressed with different vitality forms activates the same neural structures as those involved in the processing of cold action vitality forms. To this purpose, we carried out a functional magnetic resonance imaging study in which participants were tested in 2 conditions: emotional and non-emotional laughing, both conveying different vitality forms. There are 3 main results. First, the observation of emotional and non-emotional laughing conveying different vitality forms activates the insula. Second, the observation of emotional laughing activates a series of subcortical structures known to be related to emotions; furthermore, a region-of-interest analysis carried out in these structures reveals a significant modulation of the blood-oxygen-level-dependent (BOLD) signal during the processing of different vitality forms exclusively in the right amygdala, right anterior thalamus/hypothalamus, and periaqueductal gray. Third, in a subsequent electromyography study, we found a correlation between zygomatic muscle activity and the BOLD signal in the right amygdala only.
Affiliation(s)
- Massimo Marchi
- Department of Computer Science, University of Milan, via Comelico 39, 20135 Milano, Italy
- Alessandra Sciutti
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, via Melen 83, 16152 Genova, Italy
- Giacomo Rizzolatti
- Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, via Volturno 39/E, 43125 Parma, Italy
- Giuseppe Di Cesare
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, Genova, Italy (corresponding author)
10
Diehl MM, Plakke B, Albuquerque E, Romanski LM. Representation of expression and identity by ventral prefrontal neurons. Neuroscience 2022; 496:243-260. [PMID: 35654293] [PMCID: PMC10363293] [DOI: 10.1016/j.neuroscience.2022.05.033]
Abstract
Evidence has suggested that the ventrolateral prefrontal cortex (VLPFC) processes social stimuli, including faces and vocalizations, which are essential for communication. Features embedded within audiovisual stimuli, including emotional expression and caller identity, provide abundant information about an individual's intention, emotional state, motivation, and social status, which are important to encode in a social exchange. However, it is unknown to what extent the VLPFC encodes such features. To investigate the role of VLPFC during social communication, we recorded single-unit activity while rhesus macaques (Macaca mulatta) performed a nonmatch-to-sample task using species-specific face-vocalization stimuli that differed in emotional expression or caller identity. Seventy-five percent of recorded cells were task-related, and of these >70% were responsive during the nonmatch period. A larger proportion of nonmatch cells encoded the stimulus rather than the context of the trial type. Responsive neurons were most commonly modulated by the identity of the nonmatch stimulus, less often by its emotional expression or by both features of the face-vocalization stimuli presented during the nonmatch period. Neurons encoding identity were found across a broader region of VLPFC than expression-related cells, which were confined to the anterolateral portion of the recording chamber. These findings suggest that, within a working memory paradigm, VLPFC processes features of face and vocal stimuli, such as emotional expression and identity, in addition to task and contextual information. Thus, stimulus and contextual information may be integrated by VLPFC during social communication.
11
Taubert J, Wardle SG, Tardiff CT, Koele EA, Kumar S, Messinger A, Ungerleider LG. The cortical and subcortical correlates of face pareidolia in the macaque brain. Soc Cogn Affect Neurosci 2022; 17:965-976. [PMID: 35445247] [PMCID: PMC9629476] [DOI: 10.1093/scan/nsac031]
Abstract
Face detection is a foundational social skill for primates. This vital function is thought to be supported by specialized neural mechanisms; however, although several face-selective regions have been identified in both humans and nonhuman primates, there is no consensus about which region(s) are involved in face detection. Here, we used naturally occurring errors of face detection (i.e. objects with illusory facial features referred to as examples of 'face pareidolia') to identify regions of the macaque brain implicated in face detection. Using whole-brain functional magnetic resonance imaging to test awake rhesus macaques, we discovered that a subset of face-selective patches in the inferior temporal cortex, on the lower lateral edge of the superior temporal sulcus, and the amygdala respond more to objects with illusory facial features than matched non-face objects. Multivariate analyses of the data revealed differences in the representation of illusory faces across the functionally defined regions of interest. These differences suggest that the cortical and subcortical face-selective regions contribute uniquely to the detection of facial features. We conclude that face detection is supported by a multiplexed system in the primate brain.
Affiliation(s)
- Jessica Taubert
- School of Psychology, The University of Queensland, Building 24A, St Lucia, QLD 4067, Australia (corresponding author)
- Susan G Wardle
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- Clarissa T Tardiff
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- Elissa A Koele
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- Susheel Kumar
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- Adam Messinger
- Laboratory of Brain and Cognition, The National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
12
Sliwa J, Mallet M, Christiaens M, Takahashi DY. Neural basis of multi-sensory communication in primates. Ethol Ecol Evol 2022. [DOI: 10.1080/03949370.2021.2024266]
Affiliation(s)
- Julia Sliwa
- Paris Brain Institute–Institut du Cerveau, Inserm, CNRS, APHP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
- Marion Mallet
- Paris Brain Institute–Institut du Cerveau, Inserm, CNRS, APHP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
- Maëlle Christiaens
- Paris Brain Institute–Institut du Cerveau, Inserm, CNRS, APHP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
13
Abstract
In order to understand ecologically meaningful social behaviors and their neural substrates in humans and other animals, researchers have been using a variety of social stimuli in the laboratory with the goal of isolating specific processes that occur in real-life scenarios. However, certain stimuli may not be sufficiently effective at evoking typical social behaviors and neural responses. Here, we review empirical research employing different types of social stimuli, classifying them into five levels of naturalism. We describe the advantages and limitations of each level and provide selected example studies. We emphasize the important trade-off between experimental control and ecological validity across the five levels of naturalism. Taking advantage of newly emerging tools, such as real-time videos, virtual avatars, and wireless neural sampling techniques, researchers are now more than ever able to adopt social stimuli at higher levels of naturalism to better capture the dynamics and contingency of real-life social interaction.
Affiliation(s)
- Siqi Fan
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Olga Dal Monte
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Department of Psychology, University of Turin, Torino, Italy
- Steve W.C. Chang
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA
- Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA
- Wu Tsai Institute, Yale University, New Haven, CT 06510, USA
14
Ainsworth M, Sallet J, Joly O, Kyriazis D, Kriegeskorte N, Duncan J, Schüffelgen U, Rushworth MFS, Bell AH. Viewing Ambiguous Social Interactions Increases Functional Connectivity between Frontal and Temporal Nodes of the Social Brain. J Neurosci 2021; 41:6070-6086. [PMID: 34099508] [PMCID: PMC8276745] [DOI: 10.1523/jneurosci.0870-20.2021]
Abstract
Social behavior is coordinated by a network of brain regions, including those involved in the perception of social stimuli and those involved in complex functions such as inferring perceptual and mental states and controlling social interactions. The properties and function of many of these regions in isolation are relatively well understood, but less is known about how these regions interact while processing dynamic social interactions. To investigate whether the functional connectivity between brain regions is modulated by social context, we collected fMRI data from male monkeys (Macaca mulatta) viewing videos of social interactions labeled as "affiliative," "aggressive," or "ambiguous." We show activation related to the perception of social interactions along both banks of the superior temporal sulcus, parietal cortex, medial and lateral frontal cortex, and the caudate nucleus. Within this network, we show that fronto-temporal functional connectivity is significantly modulated by social context. Crucially, we link the observation of specific behaviors to changes in functional connectivity within our network. Viewing aggressive behavior was associated with a limited increase in temporo-temporal and a weak increase in cingulate-temporal connectivity. By contrast, viewing interactions where the outcome was uncertain was associated with a pronounced increase in temporo-temporal and cingulate-temporal functional connectivity. We hypothesize that this widespread network synchronization occurs when cingulate and temporal areas coordinate their activity when more difficult social inferences are being made.

Significance Statement: Processing social information from our environment requires the activation of several brain regions, which are concentrated within the frontal and temporal lobes. However, little is known about how these areas interact to facilitate the processing of different social interactions. Here we show that functional connectivity within and between the frontal and temporal lobes is modulated by social context. Specifically, we demonstrate that viewing social interactions where the outcome was unclear is associated with increased synchrony within and between the cingulate cortex and temporal cortices. These findings suggest that the coordination between the cingulate and temporal cortices is enhanced when more difficult social inferences are being made.
Affiliation(s)
- Matthew Ainsworth
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Jérôme Sallet
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom, OX3 9DU
- Inserm, Stem Cell and Brain Research Institute U1208, Université Lyon 1, 69500 Bron, France
- Olivier Joly
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Diana Kyriazis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Nikolaus Kriegeskorte
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Zuckerman Mind Brain Institute, Columbia University, New York, NY 10027
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Urs Schüffelgen
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom, OX3 9DU
- Matthew F S Rushworth
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom, OX3 9DU
- Andrew H Bell
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom, CB2 7EF
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom, OX2 6GG
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom, OX3 9DU
15
Taubert N, Stettler M, Siebert R, Spadacenta S, Sting L, Dicke P, Thier P, Giese MA. Shape-invariant encoding of dynamic primate facial expressions in human perception. eLife 2021; 10:e61197. [PMID: 34115584] [PMCID: PMC8195610] [DOI: 10.7554/elife.61197]
Abstract
Dynamic facial expressions are crucial for communication in primates. Because of the difficulty of controlling the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and how they interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, and that face dynamics were represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, while challenging appearance-based neural network theories of dynamic expression recognition.
Affiliation(s)
- Nick Taubert
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Michael Stettler
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- International Max Planck Research School for Intelligent Systems (IMPRS-IS), Tübingen, Germany
- Ramona Siebert
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Silvia Spadacenta
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Louisa Sting
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Peter Dicke
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Peter Thier
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Martin A Giese
- Section for Computational Sensomotorics, Centre for Integrative Neuroscience & Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
16
Pitzalis S, Hadj-Bouziane F, Dal Bò G, Guedj C, Strappini F, Meunier M, Farnè A, Fattori P, Galletti C. Optic flow selectivity in the macaque parieto-occipital sulcus. Brain Struct Funct 2021; 226:2911-2930. [PMID: 34043075] [DOI: 10.1007/s00429-021-02293-w]
Abstract
In humans, several neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates higher-level motion areas, such as V6 and the cingulate sulcus visual area (CSv). In the macaque, there are few studies on the sensitivity of V6 and CSv to egomotion-compatible optic flow. The only fMRI study on this issue revealed selectivity to egomotion-compatible optic flow in macaque CSv but not in V6 (Cottereau et al. Cereb Cortex 27(1):330-343, 2017; but see Fan et al. J Neurosci 35:16303-16314, 2015). Yet it is unknown whether the monkey visual motion areas MT+ and V6 display any distinctive fMRI functional profile relative to optic flow stimulation, as is the case for the homologous human areas (Pitzalis et al. Cereb Cortex 20(2):411-424, 2010). Here, we describe the sensitivity of the monkey brain to two motion stimuli (radial rings and flow fields) originally used in humans to functionally map the motion middle temporal area MT+ (Tootell et al. J Neurosci 15:3215-3230, 1995a; Nature 375:139-141, 1995b) and the motion medial parietal area V6 (Pitzalis et al. 2010), respectively. In both animals, we found regions responding only to optic flow or radial rings stimulation, and regions responding to both stimuli. A region in the parieto-occipital sulcus (likely including V6) was among the areas most highly selective for coherently moving fields of dots, further demonstrating the power of this type of stimulation to activate V6 in both humans and monkeys. We did not find any evidence that putative macaque CSv responds to flow fields.
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Fadila Hadj-Bouziane
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France
- University of Lyon 1, Lyon, France
- Giulia Dal Bò
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Carole Guedj
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France
- University of Lyon 1, Lyon, France
- Martine Meunier
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France
- University of Lyon 1, Lyon, France
- Alessandro Farnè
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France
- University of Lyon 1, Lyon, France
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
17
Avidan G, Behrmann M. Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia. Annu Rev Vis Sci 2021; 7:301-321. [PMID: 34014762] [DOI: 10.1146/annurev-vision-113020-012740]
Abstract
Congenital prosopagnosia (CP), a life-long impairment in face processing that occurs in the absence of any apparent brain damage, provides a unique model in which to explore the psychological and neural bases of normal face processing. The goal of this review is to offer a theoretical and conceptual framework that may account for the underlying cognitive and neural deficits in CP. This framework may also provide a novel perspective from which to reconcile some conflicting results and expand research in this field in new directions. The crux of the framework lies in linking the known behavioral and neural underpinnings of face processing, and their impairments in CP, to a model incorporating grid cell-like activity in the entorhinal cortex. Moreover, it stresses the involvement of active spatial scanning of the environment with eye movements and implicates their critical role in face encoding and recognition. We begin by describing the main behavioral and neural characteristics of CP, then lay down the building blocks of our proposed model, referring to the existing literature supporting this new framework. We then propose testable predictions and conclude with open questions for future research stemming from this model.
Affiliation(s)
- Galia Avidan
- Department of Psychology and Department of Cognitive and Brain Sciences, Ben-Gurion University of the Negev, Beer-Sheva 8410501, Israel
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
18
The impact of facemasks on emotion recognition, trust attribution and re-identification. Sci Rep 2021; 11:5577. [PMID: 33692417] [PMCID: PMC7970937] [DOI: 10.1038/s41598-021-84806-5]
Abstract
The COVID-19 pandemic has fostered pervasive use of facemasks around the world. While they help prevent infection, there are concerns about their possible impact on social communication. The present study investigates how emotion recognition, trust attribution, and re-identification of faces differ when faces are seen without a mask, with a standard medical facemask, and with a transparent facemask restoring visual access to the mouth region. Our results show that, in contrast to standard medical facemasks, transparent masks significantly spare the capability to recognize emotional expressions. Transparent masks also spare the capability to infer trustworthiness from faces relative to standard medical facemasks, which, in turn, dampen the perceived untrustworthiness of faces. Remarkably, while transparent masks (unlike standard masks) do not impair emotion recognition and trust attribution, they do impair the subsequent re-identification of the same, unmasked face (as standard masks do). Taken together, this evidence supports a dissociation between the mechanisms sustaining emotion and identity processing. This study represents a pivotal step in the much-needed analysis of face reading when the lower portion of the face is occluded by a facemask.
19
Liu N, Zhang H, Zhang X, Yang J, Weng X, Chen L. In Memory of Leslie G. Ungerleider. Neurosci Bull 2021; 37:592-595. [PMID: 33675525] [DOI: 10.1007/s12264-021-00648-1]
Affiliation(s)
- Ning Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Hui Zhang
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing 100191, China
- Xilin Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou 510631, Guangdong, China
- School of Psychology, South China Normal University, Guangzhou 510631, Guangdong, China
- Jiongjiong Yang
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Xuchu Weng
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou 510631, Guangdong, China
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou 510631, Guangdong, China
- Lin Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- University of Chinese Academy of Sciences, Beijing 100049, China
20
Pitcher D, Ungerleider LG. Evidence for a Third Visual Pathway Specialized for Social Perception. Trends Cogn Sci 2021; 25:100-110. [PMID: 33334693] [PMCID: PMC7811363] [DOI: 10.1016/j.tics.2020.11.006]
Abstract
Existing models propose that primate visual cortex is divided into two functionally distinct pathways. The ventral pathway computes the identity of an object; the dorsal pathway computes the location of an object, and the actions related to that object. Despite remaining influential, the two visual pathways model requires revision. Both human and non-human primate studies reveal the existence of a third visual pathway on the lateral brain surface. This third pathway projects from early visual cortex, via motion-selective areas, into the superior temporal sulcus (STS). Studies demonstrating that the STS computes the actions of moving faces and bodies (e.g., expressions, eye-gaze, audio-visual integration, intention, and mood) show that the third visual pathway is specialized for the dynamic aspects of social perception.
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, York, YO10 5DD, UK
- Leslie G Ungerleider
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA
21
Liang Y, Liu B. Cross-Subject Commonality of Emotion Representations in Dorsal Motion-Sensitive Areas. Front Neurosci 2020; 14:567797. [PMID: 33177977] [PMCID: PMC7591793] [DOI: 10.3389/fnins.2020.567797]
Abstract
Emotion perception is a crucial question in cognitive neuroscience, and the underlying neural substrates have been the subject of intense study. One of our previous studies demonstrated that motion-sensitive areas are involved in the perception of facial expressions. However, it remains unclear whether emotions perceived from whole-person stimuli can be decoded from these motion-sensitive areas. In addition, if emotions are represented in the motion-sensitive areas, we may further ask whether those representations are shared across individual subjects. To address these questions, this study collected functional images while participants viewed emotions (joy, anger, and fear) in videos of whole-person expressions (containing both face and body parts) in a block-design functional magnetic resonance imaging (fMRI) experiment. Multivariate pattern analysis (MVPA) was conducted to assess emotion decoding performance in individually defined dorsal motion-sensitive regions of interest (ROIs). Results revealed that emotions could be successfully decoded from motion-sensitive ROIs, with statistically significant classification accuracies for the three emotions as well as for positive versus negative emotions. Moreover, results from the cross-subject classification analysis showed that a person's emotion representation could be robustly predicted from other subjects' emotion representations in motion-sensitive areas. Together, these results reveal that emotions are represented in dorsal motion-sensitive areas and that this representation is consistent across subjects. Our findings provide new evidence for the involvement of motion-sensitive areas in emotion decoding and further suggest a common emotion code in these areas across individual subjects.
Affiliation(s)
- Yin Liang
- Faculty of Information Technology, College of Computer Science and Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
22
Hesse JK, Tsao DY. The macaque face patch system: a turtle's underbelly for the brain. Nat Rev Neurosci 2020; 21:695-716. [DOI: 10.1038/s41583-020-00393-w]
23
Parallel Processing of Facial Expression and Head Orientation in the Macaque Brain. J Neurosci 2020; 40:8119-8131. [PMID: 32928886] [DOI: 10.1523/jneurosci.0524-20.2020]
Abstract
When we move the features of our face, or turn our head, we communicate changes in our internal state to the people around us. How this information is encoded and used by an observer's brain is poorly understood. We investigated this issue using a functional MRI adaptation paradigm in awake male macaques. Among face-selective patches of the superior temporal sulcus (STS), we found a double dissociation of areas processing facial expression and those processing head orientation. The face-selective patches in the STS fundus were most sensitive to facial expression, as was the amygdala, whereas those on the lower, lateral edge of the sulcus were most sensitive to head orientation. The results of this study reveal a new dimension of functional organization, with face-selective patches segregating within the STS. The findings thus force a rethinking of the role of the face-processing system in representing subject-directed actions and supporting social cognition.

Significance Statement: When we are interacting with another person, we make inferences about their emotional state based on visual signals. For example, when a person's facial expression changes, we are given information about their feelings. While primates are thought to have specialized cortical mechanisms for analyzing the identity of faces, less is known about how these mechanisms unpack transient signals, like expression, that can change from one moment to the next. Here, using an fMRI adaptation paradigm, we demonstrate that while the identity of a face is held constant, there are separate mechanisms in the macaque brain for processing transient changes in the face's expression and orientation. These findings shed new light on the function of the face-processing system during social exchanges.
24
What does a "face cell" want? Prog Neurobiol 2020; 195:101880. [PMID: 32918972] [DOI: 10.1016/j.pneurobio.2020.101880]
Abstract
In the 1970s, Charlie Gross was among the first to identify neurons in the macaque inferior temporal (IT) cortex that respond selectively to faces. This seminal finding has been followed by numerous studies quantifying the visual features that trigger a response from face cells, in order to answer the question: what do face cells want? However, the connection between face-selective activity in IT cortex and visual perception remains only partially understood. Here we present fMRI results in the macaque showing that some face patches respond to illusory facial features in objects. We argue that to fully understand the functional role of face cells, we need to develop approaches that test the extent to which their responses explain what we see.
25
A Naturalistic Dynamic Monkey Head Avatar Elicits Species-Typical Reactions and Overcomes the Uncanny Valley. eNeuro 2020; 7:ENEURO.0524-19.2020. [PMID: 32513660] [PMCID: PMC7340843] [DOI: 10.1523/eneuro.0524-19.2020]
Abstract
Research on social perception in monkeys may benefit from standardized, controllable, and ethologically valid renditions of conspecifics offered by monkey avatars. However, previous work has cautioned that monkeys, like humans, show an adverse reaction toward realistic synthetic stimuli, known as the "uncanny valley" effect. We developed an improved naturalistic rhesus monkey face avatar capable of producing facial expressions (fear grin, lip smack and threat), animated by motion capture data of real monkeys. For validation, we additionally created decreasingly naturalistic avatar variants. Eight rhesus macaques were tested on the various videos and avoided looking at less naturalistic avatar variants, but not at the most naturalistic or the most unnaturalistic avatar, indicating an uncanny valley effect for the less naturalistic avatar versions. The avoidance was deepened by motion and accompanied by physiological arousal. Only the most naturalistic avatar evoked facial expressions comparable to those toward the real monkey videos. Hence, our findings demonstrate that the uncanny valley reaction in monkeys can be overcome by a highly naturalistic avatar.
26
Furl N, Begum F, Sulik J, Ferrarese FP, Jans S, Woolley C. Face space representations of movement. Neuroimage 2020; 212:116676. [DOI: 10.1016/j.neuroimage.2020.116676]
27
Zhang H, Japee S, Stacy A, Flessert M, Ungerleider LG. Anterior superior temporal sulcus is specialized for non-rigid facial motion in both monkeys and humans. Neuroimage 2020; 218:116878. [PMID: 32360168] [PMCID: PMC7478875] [DOI: 10.1016/j.neuroimage.2020.116878]
Abstract
Facial motion plays a fundamental role in the recognition of facial expressions in primates, but the neural substrates underlying this special type of biological motion are not well understood. Here, we used fMRI to investigate the extent to which the specialization for facial motion is represented in the visual system and compared the neural mechanisms for the processing of non-rigid facial motion in macaque monkeys and humans. We defined the areas specialized for facial motion as those significantly more activated when subjects perceived the motion caused by dynamic faces (dynamic faces > static faces) than when they perceived the motion caused by dynamic non-face objects (dynamic objects > static objects). We found that, in monkeys, significant activations evoked by facial motion were in the fundus of anterior superior temporal sulcus (STS), which overlapped the anterior fundus face patch. In humans, facial motion activated three separate foci in the right STS: posterior, middle, and anterior STS, with the anterior STS location showing the most selectivity for facial motion compared with other facial motion areas. In both monkeys and humans, facial motion shows a gradient preference as one progresses anteriorly along the STS. Taken together, our results indicate that monkeys and humans share similar neural substrates within the anterior temporal lobe specialized for the processing of non-rigid facial motion.
Affiliation(s)
- Hui Zhang
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- Shruti Japee
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA
- Andrea Stacy
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA
- Molly Flessert
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, MD, 20892, USA

28
Liang Y, Liu B, Ji J, Li X. Network Representations of Facial and Bodily Expressions: Evidence From Multivariate Connectivity Pattern Classification. Front Neurosci 2019; 13:1111. [PMID: 31736683 PMCID: PMC6828617 DOI: 10.3389/fnins.2019.01111] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 10/02/2019] [Indexed: 01/21/2023] Open
Abstract
Emotions can be perceived from both facial and bodily expressions. Our previous study found successful decoding of facial expressions based on functional connectivity (FC) patterns. However, the role of FC patterns in the recognition of bodily expressions remained unclear, and no neuroimaging studies had adequately addressed whether emotions perceived from facial and bodily expressions are processed by common or by different neural networks. To address this, the present study collected functional magnetic resonance imaging (fMRI) data from a block-design experiment with facial and bodily expression videos as stimuli (three emotions: anger, fear, and joy), and conducted multivariate pattern classification analysis based on the estimated FC patterns. We found that, in addition to facial expressions, bodily expressions could also be successfully decoded from the large-scale FC patterns. The emotion classification accuracies for facial expressions were higher than those for bodily expressions. A further analysis of the contributing FCs showed that emotion-discriminative networks were widely distributed across both hemispheres, containing regions that ranged from primary visual areas to higher-level cognitive areas. Moreover, for a particular emotion, the discriminative FCs for facial and bodily expressions were distinct. Together, our findings highlight the key role of FC patterns in emotion processing, indicate how large-scale FC patterns reconfigure during the processing of facial and bodily expressions, and suggest a distributed neural representation for emotion recognition. Furthermore, our results suggest that the human brain employs separate network representations for facial and bodily expressions of the same emotions. This study provides new evidence for network representations in emotion perception and may further our understanding of the mechanisms underlying body-language emotion recognition.
Affiliation(s)
- Yin Liang
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Baolin Liu
- Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin, China; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
- Junzhong Ji
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China

29
Kovacs-Balint Z, Feczko E, Pincus M, Earl E, Miranda-Dominguez O, Howell B, Morin E, Maltbie E, LI L, Steele J, Styner M, Bachevalier J, Fair D, Sanchez M. Early Developmental Trajectories of Functional Connectivity Along the Visual Pathways in Rhesus Monkeys. Cereb Cortex 2019; 29:3514-3526. [PMID: 30272135 PMCID: PMC6644858 DOI: 10.1093/cercor/bhy222] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Revised: 07/23/2018] [Accepted: 08/19/2018] [Indexed: 12/30/2022] Open
Abstract
Early social interactions shape the development of social behavior, although neither the critical periods nor the underlying neurodevelopmental processes are completely understood. Here, we studied the developmental changes in neural pathways underlying visual social engagement in the translational rhesus monkey model. Changes in functional connectivity (FC) along the ventral object and motion pathways and the dorsal attention/visuo-spatial pathways were studied longitudinally using resting-state functional MRI in infant rhesus monkeys, from birth through early weaning (3 months), given the socioemotional changes experienced during this period. Our results revealed that (1) maturation along the visual pathways proceeds in a caudo-rostral progression, with primary visual areas (V1-V3) showing strong FC as early as 2 weeks of age, whereas higher-order visual and attentional areas (e.g., MT-AST, LIP-FEF) show weak FC; (2) functional changes were pathway-specific (e.g., robust FC increases were detected in the most anterior aspect of the object pathway (TE-AMY), whereas FC remained weak in the other pathways (e.g., AST-AMY)); and (3) FC matured similarly in the right and left hemispheres. Our findings suggest that visual pathways in infant macaques undergo selective remodeling during the first 3 months of life, likely regulated by early social interactions and supporting the transition to independence from the mother.
Affiliation(s)
- Z Kovacs-Balint
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- E Feczko
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Department of Psychiatry & Behavioral Science, Emory University, Atlanta, GA, USA
- Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, OR, USA
- M Pincus
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- E Earl
- Department of Behavioral Neuroscience, Oregon Health & Science University, Portland, OR, USA
- O Miranda-Dominguez
- Department of Behavioral Neuroscience, Oregon Health & Science University, Portland, OR, USA
- B Howell
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Department of Psychiatry & Behavioral Science, Emory University, Atlanta, GA, USA
- E Morin
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Department of Psychiatry & Behavioral Science, Emory University, Atlanta, GA, USA
- E Maltbie
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- L Li
- Department of Pediatrics, Emory University, Atlanta, GA, USA
- J Steele
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- M Styner
- Department of Psychiatry, University of North Carolina, Chapel Hill, NC, USA
- J Bachevalier
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Department of Psychology, Emory University, Atlanta, GA, USA
- D Fair
- Department of Behavioral Neuroscience, Oregon Health & Science University, Portland, OR, USA
- M Sanchez
- Yerkes National Primate Research Center, Emory University, Atlanta, GA, USA
- Department of Psychiatry & Behavioral Science, Emory University, Atlanta, GA, USA

30
Pathways for smiling, disgust and fear recognition in blindsight patients. Neuropsychologia 2019; 128:6-13. [DOI: 10.1016/j.neuropsychologia.2017.08.028] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Revised: 08/03/2017] [Accepted: 08/28/2017] [Indexed: 01/08/2023]
31
Sliwa J, Takahashi D, Shepherd S. Mécanismes neuronaux pour la communication chez les primates [Neural mechanisms for communication in primates]. REVUE DE PRIMATOLOGIE 2018. [DOI: 10.4000/primatologie.2950] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
32
Liang Y, Liu B, Li X, Wang P. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity. Front Hum Neurosci 2018; 12:94. [PMID: 29615882 PMCID: PMC5868121 DOI: 10.3389/fnhum.2018.00094] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2017] [Accepted: 02/27/2018] [Indexed: 01/15/2023] Open
Abstract
How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block-design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified expression-discriminative networks for static and dynamic facial expressions that span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns contain rich expression information from which facial expressions can be accurately decoded, suggesting a novel mechanism, based on interactions between distributed brain regions, that contributes to human facial expression recognition.
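The fcMVPA approach described above (a whole-brain FC pattern per condition, fed to a classifier) can be illustrated with a minimal sketch. This is not the authors' pipeline: the simulated time series, the nearest-centroid classifier, and all names below are illustrative assumptions standing in for real fMRI runs and the machine-learning algorithms the study actually used.

```python
import numpy as np

def fc_features(ts):
    """Upper triangle of the ROI-by-ROI correlation matrix as a 1-D FC feature vector."""
    r = np.corrcoef(ts)                       # ts: (n_rois, n_timepoints)
    return r[np.triu_indices_from(r, k=1)]

def nearest_centroid_loo(X, y):
    """Leave-one-out nearest-centroid classification accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        centroids = {c: X[train & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(y)

rng = np.random.default_rng(0)
n_rois, n_tp, n_runs = 10, 120, 8
X, y = [], []
for cond in (0, 1):                           # two simulated expression conditions
    for _ in range(n_runs):
        noise = rng.standard_normal((n_rois, n_tp))
        shared = rng.standard_normal(n_tp) if cond else 0.0
        ts = noise + 1.5 * shared             # condition 1: a common signal couples the ROIs
        X.append(fc_features(ts))
        y.append(cond)
X, y = np.array(X), np.array(y)
acc = nearest_centroid_loo(X, y)
print(f"LOO decoding accuracy: {acc:.2f}")    # well above the 0.5 chance level
```

The point of the sketch is that the classifier never sees activation levels, only the pattern of between-region correlations, which is what distinguishes fcMVPA from conventional activation-based MVPA.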
Affiliation(s)
- Yin Liang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China

33
Morel A, Peyroux E, Leleu A, Favre E, Franck N, Demily C. Overview of Social Cognitive Dysfunctions in Rare Developmental Syndromes With Psychiatric Phenotype. Front Pediatr 2018; 6:102. [PMID: 29774207 PMCID: PMC5943552 DOI: 10.3389/fped.2018.00102] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Accepted: 03/27/2018] [Indexed: 12/26/2022] Open
Abstract
Rare neurodevelopmental syndromes often present social cognitive deficits that may underlie difficulties in social interactions and increase the risk of psychosis or autism spectrum disorders. However, little is known about the specificities of social cognitive impairment across syndromes, even though this remains a major challenge for clinical care. Our review provides an overview of social cognitive dysfunctions in rare diseases associated with psychiatric symptoms (with prevalences estimated between 1 in 1,200 and 1 in 25,000 live births: 22q11.2 deletion syndrome, Angelman syndrome, Fragile X syndrome, Klinefelter syndrome, Prader-Willi syndrome, Rett syndrome, Smith-Magenis syndrome, Turner syndrome, and Williams syndrome) and sheds some light on the specific mechanisms that may underlie these skills in each clinical presentation. We first detail the different processes included in the generic expression "social cognition" before summarizing the genotype, psychiatric phenotype, and non-social cognitive profile of each syndrome. Then, we offer a systematic review of the social cognitive abilities and the disturbed mechanisms with which they are likely associated. We followed the PRISMA process, including the definition of relevant search terms, the selection of studies based on clear inclusion and exclusion criteria, and the quality appraisal of papers. We finally provide insights that may considerably influence the development of adapted therapeutic interventions, such as social cognitive training (SCT) therapies specifically designed to target the psychiatric phenotype. The results of this review suggest that social cognition impairments share some similarities across syndromes. We propose that social cognitive impairments are strongly involved in behavioral symptoms regardless of the overall cognitive level measured by intelligence quotient. Better understanding of the mechanisms underlying impaired social cognition may help adapt therapeutic interventions. The studies targeting social cognition processes offer new thoughts about the development of specific cognitive training programs, as they highlight the importance of connecting neurocognitive and SCT techniques.
Affiliation(s)
- Aurore Morel
- Scientific Brain Training, Reference Center for Rare Diseases GénoPsy, CH Le Vinatier, UMR 5229, Université Lyon 1, CNRS, Lyon, France
- Elodie Peyroux
- Reference Center for Rare Diseases GénoPsy, SUR/CL3R: Service Universitaire de Réhabilitation, CH Le Vinatier, UMR 5229, Université Lyon 1, CNRS, Lyon, France
- Arnaud Leleu
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, INRA, Université Bourgogne Franche-Comté, CNRS, Dijon, France
- Emilie Favre
- Reference Center for Rare Diseases GénoPsy, CH Le Vinatier, UMR 5229, Université Lyon 1, CNRS, Lyon, France
- Nicolas Franck
- Centre ressource de réhabilitation psychosociale et de remédiation cognitive, CH Le Vinatier, Lyon et UMR 5229 (CNRS and Université Lyon), Lyon, France
- Caroline Demily
- Reference Center for Rare Diseases GénoPsy, CH Le Vinatier, UMR 5229, Université Lyon 1, CNRS, Lyon, France

34
Gothard KM, Mosher CP, Zimmerman PE, Putnam PT, Morrow JK, Fuglevand AJ. New perspectives on the neurophysiology of primate amygdala emerging from the study of naturalistic social behaviors. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2017; 9. [PMID: 28800678 DOI: 10.1002/wcs.1449] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2017] [Revised: 06/03/2017] [Accepted: 06/05/2017] [Indexed: 11/07/2022]
Abstract
A major challenge of primate neurophysiology, particularly in the domain of social neuroscience, is to adopt more natural behaviors without compromising the ability to relate patterns of neural activity to specific actions or sensory inputs. Traditional approaches have identified neural activity patterns in the amygdala in response to simplified versions of social stimuli, such as static images of faces. As a departure from this reduced approach, single images of faces were replaced with arrays of images or videos of conspecifics. These stimuli elicited more natural behaviors and new types of neural responses: (1) attention-gated responses to faces, (2) selective responses to eye contact, and (3) selective responses to touch and somatosensory feedback during the production of facial expressions. An additional advance toward more natural social behaviors in the laboratory was the implementation of dyadic social interactions. Under these conditions, neurons encoded similarly the rewards that monkeys delivered to themselves and to their social partner. These findings reinforce the value of bringing natural, ethologically valid behavioral tasks under neurophysiological scrutiny.
Affiliation(s)
- Katalin M Gothard
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA
- Clayton P Mosher
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA
- Prisca E Zimmerman
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA
- Philip T Putnam
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA
- Jeremiah K Morrow
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA
- Andrew J Fuglevand
- Department of Physiology, College of Medicine, University of Arizona, Tucson, AZ, USA

35
Landi SM, Freiwald WA. Two areas for familiar face recognition in the primate brain. Science 2017; 357:591-595. [PMID: 28798130 PMCID: PMC5612776 DOI: 10.1126/science.aan1139] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2017] [Accepted: 07/06/2017] [Indexed: 01/07/2023]
Abstract
Familiarity alters face recognition: Familiar faces are recognized more accurately than unfamiliar ones and under difficult viewing conditions when unfamiliar face recognition fails. The neural basis for this fundamental difference remains unknown. Using whole-brain functional magnetic resonance imaging, we found that personally familiar faces engage the macaque face-processing network more than unfamiliar faces. Familiar faces also recruited two hitherto unknown face areas at anatomically conserved locations within the perirhinal cortex and the temporal pole. These two areas, but not the core face-processing network, responded to familiar faces emerging from a blur with a characteristic nonlinear surge, akin to the abruptness of familiar face recognition. In contrast, responses to unfamiliar faces and objects remained linear. Thus, two temporal lobe areas extend the core face-processing network into a familiar face-recognition system.
Affiliation(s)
- Sofia M Landi
- The Rockefeller University, 1230 York Avenue, New York, NY 10065, USA

36
Perdikis D, Volhard J, Müller V, Kaulard K, Brick TR, Wallraven C, Lindenberger U. Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics. PLoS One 2017; 12:e0181225. [PMID: 28723957 PMCID: PMC5517022 DOI: 10.1371/journal.pone.0181225] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2016] [Accepted: 06/28/2017] [Indexed: 11/19/2022] Open
Abstract
Research on the perception of facial emotional expressions (FEEs) often uses static images that do not capture the dynamic character of social coordination in natural settings. Recent behavioral and neuroimaging studies suggest that dynamic FEEs (videos or morphs) enhance emotion perception. To identify mechanisms associated with the perception of FEEs with natural dynamics, the present EEG (electroencephalography) study compared (i) ecologically valid stimuli of angry and happy FEEs with natural dynamics to (ii) FEEs with unnatural dynamics and to (iii) static FEEs. FEEs with unnatural dynamics showed faces moving in a biologically possible but unpredictable and atypical manner, generally resulting in ambivalent emotional content. Participants were asked to explicitly recognize the FEEs. Using whole power (WP) and phase synchrony (Phase Locking Index, PLI), we found that brain responses discriminated between natural and unnatural FEEs (both static and dynamic). Differences were primarily observed in the timing and brain topographies of delta and theta PLI and WP, and in alpha and beta WP. Our results support the view that biologically plausible but atypical FEEs are processed by the brain through different mechanisms than natural FEEs. We conclude that natural movement dynamics are essential for the perception of FEEs and the associated brain processes.
Affiliation(s)
- Dionysios Perdikis
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Jakob Volhard
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Viktor Müller
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Kathrin Kaulard
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Timothy R. Brick
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany

37
Furl N, Lohse M, Pizzorni-Ferrarese F. Low-frequency oscillations employ a general coding of the spatio-temporal similarity of dynamic faces. Neuroimage 2017; 157:486-499. [PMID: 28619657 PMCID: PMC6390175 DOI: 10.1016/j.neuroimage.2017.06.023] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2016] [Revised: 06/01/2017] [Accepted: 06/09/2017] [Indexed: 12/14/2022] Open
Abstract
Brain networks use neural oscillations as information transfer mechanisms. Although the face perception network in occipitotemporal cortex is well-studied, contributions of oscillations to face representation remain an open question. We tested for links between oscillatory responses that encode facial dimensions and the theoretical proposal that faces are encoded in similarity-based "face spaces". We quantified similarity-based encoding of dynamic faces in magnetoencephalographic sensor-level oscillatory power for identity, expression, physical and perceptual similarity of facial form and motion. Our data show that evoked responses manifest physical and perceptual form similarity that distinguishes facial identities. Low-frequency induced oscillations (< 20Hz) manifested more general similarity structure, which was not limited to identity, and spanned physical and perceived form and motion. A supplementary fMRI-constrained source reconstruction implicated fusiform gyrus and V5 in this similarity-based representation. These findings introduce a potential link between "face space" encoding and oscillatory network communication, which generates new hypotheses about the potential oscillation-mediated mechanisms that might encode facial dimensions.
Affiliation(s)
- Nicholas Furl
- Department of Psychology, Royal Holloway, University of London, Surrey TW20 0EX, United Kingdom; Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Michael Lohse
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom; Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3QX, United Kingdom

38
Liang Y, Liu B, Xu J, Zhang G, Li X, Wang P, Wang B. Decoding facial expressions based on face-selective and motion-sensitive areas. Hum Brain Mapp 2017; 38:3113-3125. [PMID: 28345150 PMCID: PMC6866795 DOI: 10.1002/hbm.23578] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Revised: 02/21/2017] [Accepted: 03/09/2017] [Indexed: 11/07/2022] Open
Abstract
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block-design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Because multiple stimulus types were used, the impacts of facial motion and eye-related information on facial expression decoding were also examined. Motion-sensitive areas showed significant responses to emotional expressions, and dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information play important roles by carrying considerable expression information that facilitates facial expression recognition.
Affiliation(s)
- Yin Liang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, People's Republic of China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China

39
Anzellotti S, Kliemann D, Jacoby N, Saxe R. Directed network discovery with dynamic network modelling. Neuropsychologia 2017; 99:1-11. [PMID: 28215697 DOI: 10.1016/j.neuropsychologia.2017.02.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 01/18/2017] [Accepted: 02/03/2017] [Indexed: 12/13/2022]
Abstract
Cognitive tasks recruit multiple brain regions. Understanding how these regions influence each other (the network structure) is an important step toward characterizing the neural basis of cognitive processes. Often, limited evidence is available to restrict the range of hypotheses a priori, and techniques that sift efficiently through a large number of possible network structures are needed (network discovery). This article introduces a novel modelling technique for network discovery (Dynamic Network Modelling, or DNM) that builds on ideas from Granger Causality and Dynamic Causal Modelling, introducing three key changes: (1) efficient network discovery is implemented with statistical tests on the consistency of model parameters across participants, (2) the tests take into account the magnitude and sign of each influence, and (3) variance explained in independent data is used as an absolute (rather than relative) measure of the quality of the network model. In this article, we outline the functioning of DNM, validate it on simulated data for which the ground truth is known, and report an example of its application to the investigation of influences between regions during emotion recognition, revealing top-down influences from brain regions encoding abstract representations of emotions (medial prefrontal cortex and superior temporal sulcus) onto regions engaged in the perceptual analysis of facial expressions (occipital face area and fusiform face area) when participants are asked to switch between reporting the emotional valence and the age of a face.
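Since DNM builds on Granger Causality, its core question (does region X's past improve prediction of region Y beyond Y's own past?) can be sketched on simulated data. This is only the lag-1 Granger-style comparison, not the authors' DNM, which additionally tests parameter consistency across participants and scores models by variance explained in independent data; the `granger_gain` score and the simulated regions below are illustrative assumptions.

```python
import numpy as np

def granger_gain(x, y):
    """Reduction in residual variance when y[t] is predicted from (y[t-1], x[t-1])
    rather than from y[t-1] alone; > 0 means x's past carries extra information."""
    Y, yp, xp = y[1:], y[:-1], x[:-1]
    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        return np.sum((Y - design @ beta) ** 2)
    restricted = rss(np.column_stack([yp, np.ones_like(yp)]))
    full = rss(np.column_stack([yp, xp, np.ones_like(yp)]))
    return 1.0 - full / restricted

# Simulated pair of regions: y is driven by x's past, while x evolves on its own.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

gain_xy = granger_gain(x, y)   # large: x's past predicts y
gain_yx = granger_gain(y, x)   # near zero: y's past does not predict x
print(f"x -> y gain: {gain_xy:.2f}, y -> x gain: {gain_yx:.2f}")
```

The asymmetry between the two gains is what lets a discovery procedure assign a direction to the edge; formal use would replace the raw gain with an F-test and, as in DNM, a group-level consistency test on the sign and magnitude of the fitted coefficients.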
40
Behrmann M, Scherf KS, Avidan G. Neural mechanisms of face perception, their emergence over development, and their breakdown. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2016; 7:247-63. [PMID: 27196333 DOI: 10.1002/wcs.1388] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2015] [Revised: 03/17/2016] [Accepted: 03/27/2016] [Indexed: 02/03/2023]
Abstract
Face perception is probably the most developed visual perceptual skill in humans, most likely as a result of its unique evolutionary and social significance. Much recent research has converged to identify a host of relevant psychological mechanisms that support face recognition. In parallel, there has been substantial progress in uncovering the neural mechanisms that mediate rapid and accurate face perception, with specific emphasis on a broadly distributed neural circuit composed of multiple nodes whose joint activity supports face perception. This article focuses specifically on the neural underpinnings of face recognition and reviews recent structural and functional imaging studies that elucidate the neural basis of this ability. In addition, it covers recent investigations characterizing the emergence of the neural basis of face recognition over the course of development and explores the relationship between these changes and increasing behavioural competence. It also describes studies characterizing the breakdown of face recognition in impaired individuals, whether the impairment results from acquired brain damage or from a failure to master face recognition over the course of development. Finally, similarities between the neural circuits for face perception in humans and in nonhuman primates are briefly covered, as is the contribution of subcortical regions to face perception.
Affiliation(s)
- Marlene Behrmann
- Department of Psychology and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
- K Suzanne Scherf
- Department of Psychology, Pennsylvania State University, University Park, PA, USA
- Galia Avidan
- Department of Psychology, Ben Gurion University of the Negev, Beer Sheva, Israel
|
41
|
Single-unit activity during natural vision: diversity, consistency, and spatial sensitivity among AF face patch neurons. J Neurosci 2015; 35:5537-48. [PMID: 25855170 DOI: 10.1523/jneurosci.3825-14.2015] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Several visual areas within the STS of the macaque brain respond strongly to faces and other biological stimuli. Determining the principles that govern neural responses in this region has proven challenging, due in part to the inherently complex domain of dynamic biological stimuli, which is not captured by easily parameterized stimulus sets. Here we investigated neural responses in one fMRI-defined face patch in the anterior fundus (AF) of the STS while macaques freely viewed complex videos rich in natural social content. Longitudinal single-unit recordings allowed each neuron's responses to repeated video presentations to be accumulated across sessions. We found that individual neurons, while diverse in their response patterns, were consistently and deterministically driven by the video content. We used principal component analysis to compute a family of eigenneurons, whose first two components summarized 24% of the shared population activity. We found that the most prominent component of AF activity reflected an interaction between visible body region and scene layout. Close-up shots of faces elicited the strongest neural responses, whereas far-away shots of faces or close-up shots of hindquarters elicited weak or inhibitory responses. Sensitivity to the apparent proximity of faces was also observed in the gamma-band local field potential. This category-selective sensitivity to spatial scale, together with the known exchange of anatomical projections between this area and regions involved in visuospatial analysis, suggests that the AF face patch may be specialized in aspects of face perception that pertain to the layout of a social scene.
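The eigenneuron analysis in this abstract amounts to a PCA over a neurons-by-time response matrix. The sketch below illustrates that logic only; the array sizes and variable names are assumptions, and the data are synthetic rather than the authors' recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: trial-averaged firing rates for 60 neurons
# across 500 time bins of a repeated video (synthetic values).
responses = rng.standard_normal((60, 500))

# Center each neuron's time course, then take the SVD: each left
# singular vector is an "eigenneuron", a weighted combination of
# units whose shared time course summarizes population activity.
centered = responses - responses.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)      # variance explained per component
eigenneuron_weights = u[:, :2]       # per-neuron loadings (60 x 2)
eigenneuron_timecourses = vt[:2]     # shared time courses (2 x 500)

print(f"first two components explain {100 * explained[:2].sum():.1f}% of variance")
```

With real data, the fraction explained by the first two components would correspond to the 24% figure reported above; here it is whatever the random data happen to give.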
|
42
|
Ortiz-Rios M, Kuśmierek P, DeWitt I, Archakov D, Azevedo FAC, Sams M, Jääskeläinen IP, Keliris GA, Rauschecker JP. Functional MRI of the vocalization-processing network in the macaque brain. Front Neurosci 2015; 9:113. [PMID: 25883546 PMCID: PMC4381638 DOI: 10.3389/fnins.2015.00113] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2014] [Accepted: 03/17/2015] [Indexed: 12/12/2022] Open
Abstract
Using functional magnetic resonance imaging in awake behaving monkeys, we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but comparisons between responses to calls and environmental sounds also revealed clusters in frontal and parietal cortex. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys preferentially activate the auditory ventral stream, in particular areas of the anterolateral belt and parabelt.
Affiliation(s)
- Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Frederico A C Azevedo
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Georgios A Keliris
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Centre for Computational Neuroscience, Tübingen, Germany; Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland; Institute for Advanced Study and Department of Neurology, Klinikum Rechts der Isar, Technische Universität München, München, Germany
|
43
|
Abstract
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion.
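The cross-stimulus generalization test in this abstract (train a valence classifier on one cue type, test on the other) can be sketched in a few lines. The nearest-centroid classifier below stands in for the study's actual classifier, and all patterns, sizes, and names are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_patterns(n_trials, n_voxels, signal):
    """Synthetic voxel patterns: positive vs. negative valence trials
    carry a weak shared signal plus independent noise."""
    labels = np.repeat([1, -1], n_trials // 2)
    data = labels[:, None] * signal + rng.standard_normal((n_trials, n_voxels))
    return data, labels

n_voxels = 50
signal = rng.standard_normal(n_voxels) * 0.8  # shared valence code

# Train on one cue type (say, animated situations), test on another
# (say, facial expressions) that carries the same underlying code.
train_x, train_y = make_patterns(40, n_voxels, signal)
test_x, test_y = make_patterns(40, n_voxels, signal)

# Nearest-centroid decoding: compare each test pattern against the
# mean training pattern of each valence class.
centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (1, -1)])
preds = np.where(test_x @ centroids[0] > test_x @ centroids[1], 1, -1)

accuracy = np.mean(preds == test_y)
print(f"cross-stimulus decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy here plays the role of the MPFC result: the classifier generalizes because both "cue types" share an abstract valence code rather than perceptual features.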
|
44
|
Russ BE, Leopold DA. Functional MRI mapping of dynamic visual features during natural viewing in the macaque. Neuroimage 2015; 109:84-94. [PMID: 25579448 DOI: 10.1016/j.neuroimage.2015.01.012] [Citation(s) in RCA: 66] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2014] [Revised: 10/24/2014] [Accepted: 01/05/2015] [Indexed: 10/24/2022] Open
Abstract
The ventral visual pathway of the primate brain is specialized to respond to stimuli in certain categories, such as the well-studied face selective patches in the macaque inferotemporal cortex. To what extent does response selectivity determined using brief presentations of isolated stimuli predict activity during the free viewing of a natural, dynamic scene, where features are superimposed in space and time? To approach this question, we obtained fMRI activity from the brains of three macaques viewing extended video clips containing a range of social and nonsocial content and compared the fMRI time courses to a family of feature models derived from the movie content. Starting with more than two dozen feature models extracted from each movie, we created functional maps based on features whose time courses were nearly orthogonal, focusing primarily on faces, motion content, and contrast level. Activity mapping using the face feature model readily yielded functional regions closely resembling face patches obtained using a block design in the same animals. Overall, the motion feature model dominated responses in nearly all visually driven areas, including the face patches as well as ventral visual areas V4, TEO, and TE. Control experiments presenting dynamic movies, whose content was free of animals, demonstrated that biological movement critically contributed to the predominance of motion in fMRI responses. These results highlight the value of natural viewing paradigms for studying the brain's functional organization and also underscore the paramount contribution of magnocellular input to the ventral visual pathway during natural vision.
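The feature-model mapping described above can be illustrated as an ordinary least-squares regression of a voxel's time course on a small set of near-orthogonal feature regressors. The regressor names mirror the features named in the abstract, but the data and dimensions below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 300

# Synthetic feature time courses extracted from a movie
# (stand-ins for the face, motion, and contrast models).
features = {
    "faces": rng.standard_normal(n_timepoints),
    "motion": rng.standard_normal(n_timepoints),
    "contrast": rng.standard_normal(n_timepoints),
}

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n_timepoints)] + list(features.values()))

# Simulate a voxel driven mostly by the motion regressor.
true_betas = np.array([0.0, 0.2, 1.5, 0.1])
voxel = X @ true_betas + 0.5 * rng.standard_normal(n_timepoints)

# Ordinary least squares: which feature best explains this voxel?
betas, *_ = np.linalg.lstsq(X, voxel, rcond=None)
names = list(features)
best = names[np.argmax(np.abs(betas[1:]))]
print(f"dominant regressor: {best}")  # "motion" for this simulated voxel
```

Mapping the winning regressor (or its beta weight) across all voxels yields functional maps like the face-feature and motion-feature maps reported in the study.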
Affiliation(s)
- Brian E Russ
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, United States
- David A Leopold
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, United States; Neurophysiology Imaging Facility, National Institute of Mental Health, National Institute of Neurological Disorders and Stroke, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, United States
|
45
|
Contrasting specializations for facial motion within the macaque face-processing system. Curr Biol 2015; 25:261-266. [PMID: 25578903 DOI: 10.1016/j.cub.2014.11.038] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 10/29/2014] [Accepted: 11/14/2014] [Indexed: 11/20/2022]
Abstract
Facial motion transmits rich and ethologically vital information, but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain, and facial motion activates these patches and surrounding areas. Yet, it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery's organization might be. To address these questions, we used fMRI to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system.
|
46
|
Reinl M, Bartels A. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics. Neuroimage 2014; 102 Pt 2:407-15. [PMID: 25132020 DOI: 10.1016/j.neuroimage.2014.08.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 07/25/2014] [Accepted: 08/04/2014] [Indexed: 12/16/2022] Open
Abstract
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in the processing of dynamic faces. While face-processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which was played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor: emotion direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity to emotion direction in the fusiform face area (FFA), which was timeline-dependent as it occurred only within the natural frame order, and sensitivity to timeline in the superior temporal sulcus (STS), which was emotion-direction-dependent as it occurred only for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to the prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory.
Affiliation(s)
- Maren Reinl
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
|
47
|
Miki K, Kakigi R. Magnetoencephalographic study on facial movements. Front Hum Neurosci 2014; 8:550. [PMID: 25120453 PMCID: PMC4114328 DOI: 10.3389/fnhum.2014.00550] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2014] [Accepted: 07/07/2014] [Indexed: 11/15/2022] Open
Abstract
In this review, we introduce three of our studies on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and to averted eye movements. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motion. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicates that movements of facial parts may be processed in a common manner, distinct from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by the facial contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region; this activity was significantly influenced by whether the movements appeared within the facial contour and/or features, in other words, whether the eyes themselves moved, even when the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by disruption of the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.
Affiliation(s)
- Kensaku Miki
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa, Japan
|
48
|
Abstract
The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate the coding of vocal stimuli in different subfields of macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane, recorded chronically in three animals with high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the potentials evoked by conspecific vocalizations. We found a gradual decrease in overall classification performance along the caudal-to-rostral axis. Furthermore, performance in the caudal sectors was similar across individual stimuli, whereas performance in the rostral sectors differed significantly for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, classification of vocalizations was significantly better than that of the synthetic stimuli, suggesting that conjoined spectral and temporal features are necessary to explain the differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in the neural coding of conspecific vocalizations along the ventral auditory pathway.
|
49
|
Morin EL, Hadj-Bouziane F, Stokes M, Ungerleider LG, Bell AH. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex. Cereb Cortex 2014; 25:3036-45. [PMID: 24836688 DOI: 10.1093/cercor/bhu099] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, and yet the neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS) while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with three different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and an averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons were sensitive to social cues, most of them to only one of these cues. In contrast, in the anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of face-related signals as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network.
Affiliation(s)
- Elyse L Morin
- Laboratory of Brain and Cognition, NIMH/NIH, Bethesda, MD, USA
- Mark Stokes
- Oxford Centre for Human Brain Activity, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Andrew H Bell
- Laboratory of Brain and Cognition, NIMH/NIH, Bethesda, MD, USA
|
50
|
Representation of the material properties of objects in the visual cortex of nonhuman primates. J Neurosci 2014; 34:2660-73. [PMID: 24523555 DOI: 10.1523/jneurosci.2593-13.2014] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Information about the materials from which objects are made provides rich and useful clues that enable us to categorize and identify those objects, know their state (e.g., the ripeness of fruits), and act on them properly. However, despite its importance, little is known about the neural processes that underlie material perception in nonhuman primates. Here we conducted an fMRI experiment in awake macaque monkeys to explore how information about various real-world materials is represented in the visual areas of monkeys, how these neural representations correlate with perceptual material properties, and how they correspond to those in human visual areas that have been studied previously. Using a machine-learning technique, the representation in each visual area was read out from multivoxel patterns of regional activity elicited in response to images of nine real-world material categories (metal, wood, fur, etc.). The congruence of the neural representations with either a measure of low-level image properties, such as spatial frequency content, or with the visuotactile properties of materials, such as roughness, hardness, and warmness, was tested. We show that monkey V1 shares a common representation with human early visual areas, reflecting low-level image properties. By contrast, monkey V4 and the posterior inferior temporal cortex represent the visuotactile properties of materials, as do human ventral higher visual areas, although there were some interspecies differences in the representational structures. We suggest that, in monkeys, V4 and the posterior inferior temporal cortex are important stages for constructing information about the material properties of objects from their low-level image features.
|