1. Garlichs A, Lustig M, Gamer M, Blank H. Expectations guide predictive eye movements and information sampling during face recognition. iScience 2024; 27:110920. PMID: 39351204; PMCID: PMC11439840; DOI: 10.1016/j.isci.2024.110920.
Abstract
Context information has a crucial impact on our ability to recognize faces. Theoretical frameworks of predictive processing suggest that predictions derived from context guide sampling of sensory evidence at informative locations. However, it is unclear how expectations influence visual information sampling during face perception. To investigate the effects of expectations on eye movements during face anticipation and recognition, we conducted two eye-tracking experiments (n = 34, each) using cued face morphs containing expected and unexpected facial features, and clear expected and unexpected faces. Participants performed predictive saccades toward expected facial features and fixated expected more often and longer than unexpected features. In face morphs, expected features attracted early eye movements, followed by unexpected features, indicating that top-down as well as bottom-up information drives face sampling. Our results provide compelling evidence that expectations influence face processing by guiding predictive and early eye movements toward anticipated informative locations, supporting predictive processing.
Affiliation(s)
- Annika Garlichs
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Mark Lustig
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Psychology, University of Hamburg, Hamburg, Germany
- Matthias Gamer
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Hamburg Brain School, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Predictive Cognition, Research Center One Health Ruhr of the University Alliance Ruhr, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
2. Belliard S, Merck C. Is semantic dementia an outdated entity? Cortex 2024; 180:64-77. PMID: 39378711; DOI: 10.1016/j.cortex.2024.09.002.
Abstract
Does it still make clinical sense to talk about semantic dementia? For more than 10 years, some researchers and clinicians have highlighted the need for new diagnostic criteria, arguing for this entity either to be redefined or, more recently, to be divided into two partially distinct entities, each with its own supposed characteristics, namely the semantic variant primary progressive aphasia and the semantic behavioral variant frontotemporal dementia. Why such a shift? Is it no longer appropriate to talk about semantic dementia? Is it really useful to divide the concept of semantic dementia into verbal and socioemotional semantic subcomponents? Does this proposal have any clinical merit or does it solely reflect theoretical considerations? To shed light on these questions, the purpose of the present review was to explore theoretical considerations on the nature of the knowledge that is disturbed in this disease which might justify such terminological changes.
Affiliation(s)
- Serge Belliard
- Service de neurologie, CMRR Haute Bretagne, CHU Pontchaillou, 35000 Rennes, France; Normandie Univ, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, 14000 Caen, France.
- Catherine Merck
- Service de neurologie, CMRR Haute Bretagne, CHU Pontchaillou, 35000 Rennes, France; Normandie Univ, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, 14000 Caen, France
3. Meng Y, Liang C, Chen W, Liu Z, Yang C, Hu J, Gao Z, Gao S. Neural basis of language familiarity effects on voice recognition: An fNIRS study. Cortex 2024; 176:1-10. PMID: 38723449; DOI: 10.1016/j.cortex.2024.04.007.
Abstract
Recognizing talkers' identities from their speech is an important social skill in interpersonal interaction. Behavioral evidence has shown that listeners identify voices speaking their native language better than voices speaking a non-native language, a phenomenon known as the language familiarity effect (LFE). However, its underlying neural mechanisms remain unclear. This study therefore investigated how the LFE arises at the neural level by employing functional near-infrared spectroscopy (fNIRS). Late unbalanced bilinguals first learned to associate strangers' voices with their identities and were then tested on recognizing the talkers' identities from voices speaking a language that was highly familiar (the native language, Chinese), moderately familiar (the second language, English), or completely unfamiliar (Ewe) to participants. Participants identified talkers most accurately in Chinese and least accurately in Ewe. Talker identification was quicker in Chinese than in English and Ewe, but reaction times did not differ between the two non-native languages. At the neural level, recognizing voices speaking Chinese relative to English/Ewe produced less activity in the inferior frontal gyrus, precentral/postcentral gyrus, supramarginal gyrus, and superior temporal sulcus/gyrus, whereas no difference was found between English and Ewe, indicating that automatic phonological encoding in the native language facilitates voice identification. These findings shed new light on the interrelations between language ability and voice recognition, revealing that the brain activation pattern of the LFE depends on the automaticity of language processing.
Affiliation(s)
- Yuan Meng
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chunyan Liang
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; Zhuojin Branch of Yandaojie Primary School, Chengdu, China
- Wenjing Chen
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Zhaoning Liu
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Chaoqing Yang
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Jiehui Hu
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Zhao Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
- Shan Gao
- School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China; The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, China
4. Volfart A, Rossion B. The neuropsychological evaluation of face identity recognition. Neuropsychologia 2024; 198:108865. PMID: 38522782; DOI: 10.1016/j.neuropsychologia.2024.108865.
Abstract
Facial identity recognition (FIR) is arguably the ultimate form of recognition for the adult human brain. Even if the term prosopagnosia is reserved for exceptionally rare brain-damaged cases with a category-specific abrupt loss of FIR at adulthood, subjective and objective impairments or difficulties of FIR are common in the neuropsychological population. Here we provide a critical overview of the evaluation of FIR both for clinicians and researchers in neuropsychology. FIR impairments occur following many causes that should be identified objectively by both general and specific, behavioral and neural examinations. We refute the commonly used dissociation between perceptual and memory deficits/tests for FIR, since even a task involving the discrimination of unfamiliar face images presented side-by-side relies on cortical memories of faces in the right-lateralized ventral occipito-temporal cortex. Another frequently encountered confusion is between specific deficits of the FIR function and a more general impairment of semantic memory (of people), the latter being most often encountered following anterior temporal lobe damage. Many computerized tests aimed at evaluating FIR have appeared over the last two decades, as reviewed here. However, despite undeniable strengths, they often suffer from ecological limitations, difficulties of instruction, as well as a lack of consideration for processing speed and qualitative information. Taking into account these issues, a recently developed behavioral test with natural images manipulating face familiarity, stimulus inversion, and correct response times as a key variable appears promising. The measurement of electroencephalographic (EEG) activity in the frequency domain from fast periodic visual stimulation also appears as a particularly promising tool to complete and enhance the neuropsychological assessment of FIR.
Affiliation(s)
- Angélique Volfart
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology, Australia.
- Bruno Rossion
- Centre for Biomedical Technologies, Queensland University of Technology, Australia; Université de Lorraine, CNRS, IMoPA, F-54000, Nancy, France
5. Rupp KM, Hect JL, Harford EE, Holt LL, Ghuman AS, Abel TJ. A hierarchy of processing complexity and timescales for natural sounds in human auditory cortex. bioRxiv [Preprint] 2024:2024.05.24.595822. PMID: 38826304; PMCID: PMC11142240; DOI: 10.1101/2024.05.24.595822.
Abstract
Efficient behavior is supported by humans' ability to rapidly recognize acoustically distinct sounds as members of a common category. Within auditory cortex, there are critical unanswered questions regarding the organization and dynamics of sound categorization. Here, we performed intracerebral recordings in the context of epilepsy surgery as 20 patient-participants listened to natural sounds. We built encoding models to predict neural responses using features of these sounds extracted from different layers within a sound-categorization deep neural network (DNN). This approach yielded highly accurate models of neural responses throughout auditory cortex. The complexity of a cortical site's representation (measured by the depth of the DNN layer that produced the best model) was closely related to its anatomical location, with shallow, middle, and deep layers of the DNN associated with core (primary auditory cortex), lateral belt, and parabelt regions, respectively. Smoothly varying gradients of representational complexity also existed within these regions, with complexity increasing along a posteromedial-to-anterolateral direction in core and lateral belt, and along posterior-to-anterior and dorsal-to-ventral dimensions in parabelt. When we estimated the time window over which each recording site integrates information, we found shorter integration windows in core relative to lateral belt and parabelt. Lastly, we found a relationship between the length of the integration window and the complexity of information processing within core (but not lateral belt or parabelt). These findings suggest hierarchies of timescales and processing complexity, and their interrelationship, represent a functional organizational principle of the auditory stream that underlies our perception of complex, abstract auditory information.
Affiliation(s)
- Kyle M. Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Emily E. Harford
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Lori L. Holt
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Avniel Singh Ghuman
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
6. Garlichs A, Blank H. Prediction error processing and sharpening of expected information across the face-processing hierarchy. Nat Commun 2024; 15:3407. PMID: 38649694; PMCID: PMC11035707; DOI: 10.1038/s41467-024-47749-9.
Abstract
The perception and neural processing of sensory information are strongly influenced by prior expectations. The integration of prior and sensory information can manifest through distinct underlying mechanisms: focusing on unexpected input, denoted as prediction error (PE) processing, or amplifying anticipated information via sharpened representation. In this study, we employed computational modeling using deep neural networks combined with representational similarity analyses of fMRI data to investigate these two processes during face perception. Participants were cued to see face images, some generated by morphing two faces, leading to ambiguity in face identity. We show that expected faces were identified faster and perception of ambiguous faces was shifted towards priors. Multivariate analyses uncovered evidence for PE processing across and beyond the face-processing hierarchy from the occipital face area (OFA), via the fusiform face area, to the anterior temporal lobe, and suggest sharpened representations in the OFA. Our findings support the proposition that the brain represents faces grounded in prior expectations.
Affiliation(s)
- Annika Garlichs
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany.
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany.
7. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity together with experience and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
8. Gainotti G. Human Recognition: The Utilization of Face, Voice, Name and Interactions-An Extended Editorial. Brain Sci 2024; 14:345. PMID: 38671996; PMCID: PMC11048321; DOI: 10.3390/brainsci14040345.
Abstract
The many stimulating contributions to this Special Issue of Brain Sciences focused on some basic issues of particular interest in current research, with emphasis on human recognition using faces, voices, and names [...].
Affiliation(s)
- Guido Gainotti
- Institute of Neurology, Università Cattolica del Sacro Cuore, Fondazione Policlinico A. Gemelli, Istituto di Ricovero e Cura a Carattere Scientifico, 00168 Rome, Italy
9. Castro-Laguardia AM, Ontivero-Ortega M, Morato C, Lucas I, Vila J, Bobes León MA, Muñoz PG. Familiarity Processing through Faces and Names: Insights from Multivoxel Pattern Analysis. Brain Sci 2023; 14:39. PMID: 38248254; PMCID: PMC10813351; DOI: 10.3390/brainsci14010039.
Abstract
How our brain processes personal familiarity is still under debate. We used searchlight multivoxel pattern analysis (MVPA) to identify areas where local fMRI patterns could contribute to familiarity detection for both face and name categories. Significantly, we identified cortical areas in frontal, temporal, cingulate, and insular regions where familiar stimuli from one category could be accurately cross-classified using a classifier trained on stimuli from the other category (i.e., abstract familiarity) based on local fMRI patterns. We also discovered several areas in the fusiform gyrus and in frontal and temporal regions, primarily lateralized to the right hemisphere, that supported the classification of familiar faces but not of familiar names. In addition, responses to familiar names (compared to unfamiliar names) consistently showed weaker activation than responses to familiar faces (compared to unfamiliar faces). The results revealed a set of abstract familiarity areas (independent of stimulus type) as well as regions specifically related to face familiarity, both contributing to the recognition of familiar individuals.
Affiliation(s)
- Ana Maria Castro-Laguardia
- Department of Cognitive and Social Neuroscience, Cuban Center for Neurosciences (CNEURO), Rotonda La Muñeca, 15202 Avenida 25, La Habana 11600, Cuba
- Marlis Ontivero-Ortega
- Department of Cognitive and Social Neuroscience, Cuban Center for Neurosciences (CNEURO), Rotonda La Muñeca, 15202 Avenida 25, La Habana 11600, Cuba
- Cristina Morato
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada (UGR), Avda. del Hospicio, s/n P.C., 18010 Granada, Spain
- Ignacio Lucas
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada (UGR), Avda. del Hospicio, s/n P.C., 18010 Granada, Spain
- Jaime Vila
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada (UGR), Avda. del Hospicio, s/n P.C., 18010 Granada, Spain
- María Antonieta Bobes León
- Department of Cognitive and Social Neuroscience, Cuban Center for Neurosciences (CNEURO), Rotonda La Muñeca, 15202 Avenida 25, La Habana 11600, Cuba
- Pedro Guerra Muñoz
- Mind, Brain and Behavior Research Center (CIMCYC), University of Granada (UGR), Avda. del Hospicio, s/n P.C., 18010 Granada, Spain
10. Belo J, Clerc M, Schön D. The effect of familiarity on neural tracking of music stimuli is modulated by mind wandering. AIMS Neurosci 2023; 10:319-331. PMID: 38188009; PMCID: PMC10767062; DOI: 10.3934/neuroscience.2023025.
Abstract
One way to investigate the cortical tracking of continuous auditory stimuli is to use the stimulus reconstruction approach. However, the cognitive and behavioral factors impacting this cortical representation remain largely overlooked. Two possible candidates are familiarity with the stimulus and the ability to resist internal distractions. To explore the possible impacts of these two factors on the cortical representation of natural music stimuli, forty-one participants listened to monodic natural music stimuli while we recorded their neural activity. Using the stimulus reconstruction approach and linear mixed models, we found that familiarity positively impacted the reconstruction accuracy of music stimuli and that this effect of familiarity was modulated by mind wandering.
Affiliation(s)
- Joan Belo
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Maureen Clerc
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Daniele Schön
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Institute for Language, Communication, and the Brain, Aix-en-Provence, France
11. Ma Y, Yu K, Yin S, Li L, Li P, Wang R. Attention Modulates the Role of Speakers' Voice Identity and Linguistic Information in Spoken Word Processing: Evidence From Event-Related Potentials. J Speech Lang Hear Res 2023; 66:1678-1693. DOI: 10.1044/2023_jslhr-22-00420.
Abstract
Purpose: The human voice usually carries two types of information: linguistic and identity information. However, whether and how linguistic information interacts with identity information remains controversial. This study aimed to explore the processing of identity and linguistic information during spoken word processing, considering the modulation of attention.
Method: We conducted two event-related potential (ERP) experiments. Different speakers (self, friend, and unfamiliar speakers) and emotional words (positive, negative, and neutral words) were used to manipulate the identity and linguistic information. With this manipulation, Experiment 1 explored identity and linguistic information processing with a word decision task that requires participants' explicit attention to linguistic information. Experiment 2 further investigated the issue with a passive oddball paradigm that requires rare attention to either the identity or linguistic information.
Results: Experiment 1 revealed an interaction among speaker, word type, and hemisphere in N400 amplitudes but not in N100 or P200 amplitudes, suggesting that identity information interacted with linguistic information at a later stage of spoken word processing. The mismatch negativity results of Experiment 2 showed no significant interaction between speaker and word pair, indicating that identity and linguistic information were processed independently.
Conclusions: Identity information can interact with linguistic information during spoken word processing, but the interaction is modulated by task demands on attention. We propose an attention-modulated explanation of the mechanism underlying identity and linguistic information processing. Implications of our findings are discussed in light of the integration and independence theories.
Affiliation(s)
- Yunxiao Ma
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Keke Yu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Shuqi Yin
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Li Li
- The Key Laboratory of Chinese Learning and International Promotion, and College of International Culture, South China Normal University, Guangzhou, China
- Ping Li
- Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruiming Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
12. Zäske R, Kaufmann JM, Schweinberger SR. Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces. Brain Sci 2023; 13:637. PMID: 37190602; PMCID: PMC10136676; DOI: 10.3390/brainsci13040637.
Abstract
Recognizing people from their voices may be facilitated by a voice's distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation) in six learning-test cycles. During learning, distinctive faces increased early visually-evoked (N170, P200, N250) potentials relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at the occipito-temporal and fronto-central electrodes. At the test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than were voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. The preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition.
Affiliation(s)
- Romi Zäske
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystraße 3, 07743 Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
- Jürgen M. Kaufmann
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
13. Pang W, Zhou W, Ruan Y, Zhang L, Shu H, Zhang Y, Zhang Y. Visual Deprivation Alters Functional Connectivity of Neural Networks for Voice Recognition: A Resting-State fMRI Study. Brain Sci 2023; 13:636. PMID: 37190601; DOI: 10.3390/brainsci13040636.
Abstract
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to recognize a person's identity. It remains unclear how the neural systems for voice recognition reorganize in the blind. In the present study, we collected behavioral and resting-state fMRI data from 20 early blind (5 females; mean age = 22.6 years) and 22 sighted control (7 females; mean age = 23.7 years) individuals. We aimed to investigate the alterations in the resting-state functional connectivity (FC) among the voice- and face-sensitive areas in blind subjects in comparison with controls. We found that the intranetwork connections among voice-sensitive areas, including amygdala-posterior "temporal voice areas" (TVAp), amygdala-anterior "temporal voice areas" (TVAa), and amygdala-inferior frontal gyrus (IFG) were enhanced in the early blind. The blind group also showed increased FCs of "fusiform face area" (FFA)-IFG and "occipital face area" (OFA)-IFG but decreased FCs between the face-sensitive areas (i.e., FFA and OFA) and TVAa. Moreover, the voice-recognition accuracy was positively related to the strength of TVAp-FFA in the sighted, and the strength of amygdala-FFA in the blind. These findings indicate that visual deprivation shapes functional connectivity by increasing the intranetwork connections among voice-sensitive areas while decreasing the internetwork connections between the voice- and face-sensitive areas. Moreover, the face-sensitive areas are still involved in the voice-recognition process in blind individuals through pathways such as the subcortical-occipital or occipitofrontal connections, which may benefit the visually impaired greatly during voice processing.
Affiliation(s)
- Wenbin Pang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
| | - Wei Zhou
- Beijing Key Lab of Learning and Cognition, School of Psychology, Capital Normal University, Beijing 100048, China
| | - Yufang Ruan
- School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC H3A 1G1, Canada
- Centre for Research on Brain, Language and Music, Montréal, QC H3A 1G1, Canada
| | - Linjun Zhang
- School of Chinese as a Second Language, Peking University, Beijing 100871, China
| | - Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, The University of Minnesota, Minneapolis, MN 55455, USA
| | - Yumei Zhang
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Department of Rehabilitation, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
| |
|
14
|
Which components of famous people recognition are lateralized? A study of face, voice and name recognition disorders in patients with neoplastic or degenerative damage of the right or left anterior temporal lobes. Neuropsychologia 2023; 181:108490. [PMID: 36693520 DOI: 10.1016/j.neuropsychologia.2023.108490] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 01/16/2023] [Accepted: 01/16/2023] [Indexed: 01/22/2023]
Abstract
To clarify which components of famous people recognition are lateralized, we administered the 'Famous People Recognition Battery' (FPRB), in which subjects are required to recognize the same 40 famous people through their faces, voices, and names, to large groups of patients with neoplastic or degenerative damage affecting the right or left ATL. At the familiarity level, we found, as expected, a dissociation: patients with right ATL lesions were more impaired on the non-verbal (face and voice) recognition modalities, whereas those with left ATL lesions were more impaired on name familiarity. Results at the naming level were equally expected, because the worst naming scores for faces and voices were observed in left-sided patients. Results at the semantic level were less predictable, for two reasons. First, no difference was found between the two hemispheric groups when scores obtained on the verbal (name) and non-verbal (face and voice) recognition modalities were taken into account. Second, the face and voice recognition modalities showed different degrees of right lateralization. Indeed, all patient groups showed, both at the familiarity and at the semantic level, greater difficulty in recognizing voices than faces, but this difference reached significance only in patients with right ATL lesions, suggesting a greater right lateralization of the more complex task of voice recognition. A model aiming to explain the greater right lateralization of the more perceptually demanding voice modality of person recognition is proposed.
|
15
|
Blank H, Alink A, Büchel C. Multivariate functional neuroimaging analyses reveal that strength-dependent face expectations are represented in higher-level face-identity areas. Commun Biol 2023; 6:135. [PMID: 36725984 PMCID: PMC9892564 DOI: 10.1038/s42003-023-04508-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 01/19/2023] [Indexed: 02/03/2023] Open
Abstract
Perception is an active inference in which prior expectations are combined with sensory input. It is still unclear how the strength of prior expectations is represented in the human brain. The strength, or precision, of a prior could be represented with its content, potentially in higher-level sensory areas. We used multivariate analyses of functional magnetic resonance imaging data to test whether expectation strength is represented together with the expected face in high-level face-sensitive regions. Participants were trained to associate images of scenes with subsequently presented images of different faces. Each scene predicted three faces, each with either low, intermediate, or high probability. We found that anticipation enhances the similarity of response patterns in the face-sensitive anterior temporal lobe to response patterns specifically associated with the image of the expected face. In contrast, during face presentation, activity increased for unexpected faces in a typical prediction error network, containing areas such as the caudate and the insula. Our findings show that strength-dependent face expectations are represented in higher-level face-identity areas, supporting hierarchical theories of predictive processing according to which higher-level sensory regions represent weighted priors.
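The pattern-similarity logic at the heart of such an analysis can be sketched with synthetic data; the identity labels, pattern sizes, and noise model below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical voxel response patterns for three face identities,
# e.g. as estimated from a face-sensitive ROI during face presentation.
face_patterns = {f: rng.normal(size=n_voxels) for f in ("A", "B", "C")}

# Synthetic anticipation-period pattern: mostly the expected face "A"
# plus noise, mimicking a pre-activated face template.
anticipation = 0.7 * face_patterns["A"] + rng.normal(size=n_voxels)

def pattern_similarity(x, y):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(x, y)[0, 1])

sims = {f: pattern_similarity(anticipation, p) for f, p in face_patterns.items()}
# The expectation account predicts the anticipation pattern is most
# similar to the pattern of the expected face.
assert max(sims, key=sims.get) == "A"
```

A real analysis would estimate the patterns from fMRI responses and compare such similarities across expectation-strength conditions; the toy version only shows why a pre-activated template raises similarity to the expected identity.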
Affiliation(s)
- Helen Blank
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Arjen Alink
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Christian Büchel
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| |
|
16
|
Karlsson T, Schaefer H, Barton JJS, Corrow SL. Effects of Voice and Biographic Data on Face Encoding. Brain Sci 2023; 13:brainsci13010148. [PMID: 36672128 PMCID: PMC9857090 DOI: 10.3390/brainsci13010148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 01/05/2023] [Accepted: 01/10/2023] [Indexed: 01/18/2023] Open
Abstract
There are various perceptual and informational cues for recognizing people. How these interact in the recognition process is of interest. Our goal was to determine if the encoding of faces was enhanced by the concurrent presence of a voice, biographic data, or both. Using a between-subject design, four groups of 10 subjects learned the identities of 24 faces seen in video-clips. Half of the faces were seen only with their names, while the other half had additional information. For the first group this was the person's voice, for the second, it was biographic data, and for the third, both voice and biographic data. In a fourth control group, the additional information was the voice of a generic narrator relating non-biographic information. In the retrieval phase, subjects performed a familiarity task and then a face-to-name identification task with dynamic faces alone. Our results consistently showed no benefit to face encoding with additional information, for either the familiarity or identification task. Tests for equivalency indicated that facilitative effects of a voice or biographic data on face encoding were not likely to exceed 3% in accuracy. We conclude that face encoding is minimally influenced by cross-modal information from voices or biographic data.
Affiliation(s)
- Thilda Karlsson
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Faculty of Medicine, Linköping University, 582 25 Linköping, Sweden
| | - Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
| | - Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Correspondence: ; Tel.: +604-875-4339; Fax: +604-875-4302
| | - Sherryse L. Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Department of Psychology, Bethel University, St. Paul, MN 55112, USA
| |
|
17
|
Sun Y, Ming L, Sun J, Guo F, Li Q, Hu X. Brain mechanism of unfamiliar and familiar voice processing: an activation likelihood estimation meta-analysis. PeerJ 2023; 11:e14976. [PMID: 36935917 PMCID: PMC10019337 DOI: 10.7717/peerj.14976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 02/08/2023] [Indexed: 03/14/2023] Open
Abstract
Interpersonal communication through vocal information is very important for human society. During verbal interactions, our vocal cord vibrations convey important information regarding voice identity, which allows us to decide how to respond to speakers (e.g., neither greeting a stranger too warmly nor speaking too coldly to a friend). Numerous neural studies have shown that identifying familiar and unfamiliar voices may rely on different neural bases. However, the mechanism underlying voice identification of individuals of varying familiarity has not been determined due to vague definitions, confusion of terms, and differences in task design. To address this issue, the present study first categorized three kinds of voice identity processing (perception, recognition and identification) from speakers with different degrees of familiarity. We defined voice identity perception as passively listening to a voice or determining if the voice was human, voice identity recognition as determining if the sound heard was acoustically familiar, and voice identity identification as ascertaining whether a voice is associated with a name or face. Of these, voice identity perception involves processing unfamiliar voices, and voice identity recognition and identification involve processing familiar voices. According to these three definitions, we performed activation likelihood estimation (ALE) on 32 studies and revealed different brain mechanisms underlying processing of unfamiliar and familiar voice identities.
The results were as follows: (1) familiar voice recognition/identification was supported by a network involving most regions in the temporal lobe, some regions in the frontal lobe, subcortical structures and regions around the marginal lobes; (2) the bilateral superior temporal gyrus was recruited for voice identity perception of an unfamiliar voice; (3) voice identity recognition/identification of familiar voices was more likely to activate the right frontal lobe than voice identity perception of unfamiliar voices, while voice identity perception of an unfamiliar voice was more likely to activate the bilateral temporal lobe and left frontal lobe; and (4) the bilateral superior temporal gyrus served as a shared neural basis of unfamiliar voice identity perception and familiar voice identity recognition/identification. In general, the results of the current study address gaps in the literature, provide clear definitions of concepts, and indicate brain mechanisms for subsequent investigations.
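For orientation, the core ALE computation as commonly described (each study's reported foci are smoothed with a Gaussian kernel into a modeled-activation map, and the maps are combined as a probabilistic union across studies) can be sketched as follows; the grid, kernel width, peak value, and foci are toy assumptions, not values from this meta-analysis:

```python
import numpy as np

def modeled_activation(shape, foci, sigma=2.0, peak=0.5):
    """Per-study modeled-activation map: at each voxel, the maximum over
    that study's foci of a Gaussian 'activation probability' (peak < 1;
    in real ALE the peak follows from the kernel normalization)."""
    grid = np.indices(shape).astype(float)
    ma = np.zeros(shape)
    for f in foci:
        d2 = sum((grid[i] - f[i]) ** 2 for i in range(3))
        ma = np.maximum(ma, peak * np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(shape, studies, **kw):
    """ALE = 1 - prod_i (1 - MA_i): the probabilistic union across studies."""
    ale = np.zeros(shape)
    for foci in studies:
        ale = 1 - (1 - ale) * (1 - modeled_activation(shape, foci, **kw))
    return ale

# Two hypothetical studies; they converge near voxel (10, 10, 10).
studies = [[(10, 10, 10)], [(11, 10, 10), (30, 5, 5)]]
ale = ale_map((40, 20, 20), studies)
# The voxel where both studies contribute outranks any single-study focus.
assert ale[10, 10, 10] > ale[30, 5, 5]
```

Real ALE additionally tests the resulting map against a null distribution of randomly relocated foci; that inference step is omitted here.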
|
18
|
Fransson S, Corrow S, Yeung S, Schaefer H, Barton JJS. Effects of Faces and Voices on the Encoding of Biographic Information. Brain Sci 2022; 12:brainsci12121716. [PMID: 36552175 PMCID: PMC9775626 DOI: 10.3390/brainsci12121716] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 12/10/2022] [Accepted: 12/12/2022] [Indexed: 12/23/2022] Open
Abstract
There are multiple forms of knowledge about people. Whether diverse person-related data interact is of interest regarding the more general issue of integration of multi-source information about the world. Our goal was to examine whether perception of a person's face or voice enhanced the encoding of their biographic data. We performed three experiments. In the first experiment, subjects learned the biographic data of a character with or without a video clip of their face. In the second experiment, they learned the character's data with an audio clip of either a generic narrator's voice or the character's voice relating the same biographic information. In the third experiment, an audiovisual clip of both the face and voice of either a generic narrator or the character accompanied the learning of biographic data. After learning, a test phase presented biographic data alone, and subjects were tested first for familiarity and second for matching of biographic data to the name. The results showed equivalent learning of biographic data across all three experiments, and none showed evidence that a character's face or voice enhanced the learning of biographic information. We conclude that the simultaneous processing of perceptual representations of people may not modulate the encoding of biographic data.
Affiliation(s)
- Sarah Fransson
- Faculty of Medicine, Linköping University, 581 83 Linköping, Sweden
| | - Sherryse Corrow
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Department of Psychology, Bethel University, St. Paul, MN 55112, USA
| | - Shanna Yeung
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
| | - Heidi Schaefer
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
| | - Jason J. S. Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC V5Z 3N9, Canada
- Correspondence: ; Tel.: +1-604-875-4339; Fax: +1-604-875-4302
| |
|
19
|
Schroeger A, Kaufmann JM, Zäske R, Kovács G, Klos T, Schweinberger SR. Atypical prosopagnosia following right hemispheric stroke: A 23-year follow-up study with M.T. Cogn Neuropsychol 2022; 39:196-207. [PMID: 36202621 DOI: 10.1080/02643294.2022.2119838] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
Most findings on prosopagnosia to date suggest preserved voice recognition in prosopagnosia (except in cases with bilateral lesions). Here we report a follow-up examination on M.T., suffering from acquired prosopagnosia following a large unilateral right-hemispheric lesion in frontal, parietal, and anterior temporal areas excluding core ventral occipitotemporal face areas. Twenty-three years after initial testing we reassessed face and object recognition skills [Henke, K., Schweinberger, S. R., Grigo, A., Klos, T., & Sommer, W. (1998). Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex, 34(2), 289-296]; [Schweinberger, S. R., Klos, T., & Sommer, W. (1995). Covert face recognition in prosopagnosia - A dissociable function? Cortex, 31(3), 517-529] and additionally studied voice recognition. Confirming the persistence of deficits, M.T. exhibited substantial impairments in famous face recognition and memory for learned faces, but preserved face matching and object recognition skills. Critically, he showed substantially impaired voice recognition skills. These findings are congruent with the ideas that (i) prosopagnosia after right anterior temporal lesions can persist over long periods > 20 years, and that (ii) such lesions can be associated with both facial and vocal deficits in person recognition.
Affiliation(s)
- Anna Schroeger
- Department of Psychology, Faculty of Psychology and Sports Science, Justus Liebig University, Giessen, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University, Jena, Germany
| | - Jürgen M Kaufmann
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany
- DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
| | - Romi Zäske
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany
- DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
| | - Gyula Kovács
- DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
- Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich Schiller University, Jena, Germany
| | | | - Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University, Jena, Germany
- DFG Research Unit Person Perception, Friedrich Schiller University, Jena, Germany
| |
|
20
|
Schelinski S, Tabas A, von Kriegstein K. Altered processing of communication signals in the subcortical auditory sensory pathway in autism. Hum Brain Mapp 2022; 43:1955-1972. [PMID: 35037743 PMCID: PMC8933247 DOI: 10.1002/hbm.25766] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 11/24/2021] [Accepted: 12/19/2021] [Indexed: 12/17/2022] Open
Abstract
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD as compared to the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD as compared to the control group when passively listening to vocal in contrast to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise as compared to when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.
Affiliation(s)
- Stefanie Schelinski
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Alejandro Tabas
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Katharina von Kriegstein
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
|
21
|
Volfart A, Yan X, Maillard L, Colnat-Coulbois S, Hossu G, Rossion B, Jonas J. Intracerebral electrical stimulation of the right anterior fusiform gyrus impairs human face identity recognition. Neuroimage 2022; 250:118932. [PMID: 35085763 DOI: 10.1016/j.neuroimage.2022.118932] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 01/17/2022] [Accepted: 01/23/2022] [Indexed: 01/23/2023] Open
Abstract
Brain regions located between the right fusiform face area (FFA) in the middle fusiform gyrus and the temporal pole may play a critical role in human face identity recognition but their investigation is limited by a large signal drop-out in functional magnetic resonance imaging (fMRI). Here we report an original case who is suddenly unable to recognize the identity of faces when electrically stimulated on a focal location inside this intermediate region of the right anterior fusiform gyrus. The reliable transient identity recognition deficit occurs without any change of percept, even during nonverbal face tasks (i.e., pointing out the famous face picture among three options; matching pictures of unfamiliar or familiar faces for their identities), and without difficulty at recognizing visual objects or famous written names. The effective contact is associated with the largest frequency-tagged electrophysiological signals of face-selectivity and of familiar and unfamiliar face identity recognition. This extensive multimodal investigation points to the right anterior fusiform gyrus as a critical hub of the human cortical face network, between posterior ventral occipito-temporal face-selective regions directly connected to low-level visual cortex, the medial temporal lobe involved in generic memory encoding, and ventral anterior temporal lobe regions holding semantic associations to people's identity.
Affiliation(s)
- Angélique Volfart
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; University of Louvain, Psychological Sciences Research Institute, B-1348 Louvain-La-Neuve, Belgium
| | - Xiaoqian Yan
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; University of Louvain, Psychological Sciences Research Institute, B-1348 Louvain-La-Neuve, Belgium; Stanford University, Department of Psychology, CA 94305 Stanford, USA
| | - Louis Maillard
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
| | - Sophie Colnat-Coulbois
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurochirurgie, F-54000 Nancy, France
| | - Gabriela Hossu
- Université de Lorraine, CHRU-Nancy, CIC-IT, F-54000 Nancy, France; Université de Lorraine, Inserm, IADI, F-54000 Nancy, France
| | - Bruno Rossion
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; University of Louvain, Psychological Sciences Research Institute, B-1348 Louvain-La-Neuve, Belgium; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
| | - Jacques Jonas
- Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France.
| |
|
22
|
Spatially Adjacent Regions in Posterior Cingulate Cortex Represent Familiar Faces at Different Levels of Complexity. J Neurosci 2021; 41:9807-9826. [PMID: 34670848 PMCID: PMC8612644 DOI: 10.1523/jneurosci.1580-20.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 08/25/2021] [Accepted: 09/26/2021] [Indexed: 11/21/2022] Open
Abstract
Extensive research has shown that perceptual information of faces is processed in a network of hierarchically organized areas within ventral temporal cortex. For familiar and famous faces, perceptual processing of faces is normally accompanied by extraction of semantic knowledge about the social status of persons. Semantic processing of familiar faces could entail progressive stages of information abstraction. However, the cortical mechanisms supporting multistage processing of familiar faces have not been characterized. Here, using an event-related fMRI experiment, familiar faces from four celebrity groups (actors, singers, politicians, and football players) and unfamiliar faces were presented to the human subjects (both males and females) while they were engaged in a face categorization task. We systematically explored the cortical representations for faces, familiar faces, subcategories of familiar faces, and familiar face identities using whole-brain univariate analysis, searchlight-based multivariate pattern analysis (MVPA), and functional connectivity analysis. Convergent evidence from all these analyses revealed a set of overlapping regions within posterior cingulate cortex (PCC) that contained decodable fMRI responses for representing different levels of semantic knowledge about familiar faces. Our results suggest a multistage pathway in PCC for processing semantic information of faces, analogous to the multistage pathway in ventral temporal cortex for processing perceptual information of faces. SIGNIFICANCE STATEMENT: Recognizing familiar faces is an important component of social communication. Previous research has shown that a distributed network of brain areas is involved in processing the semantic information of familiar faces. However, it is not clear how different levels of semantic information are represented in the brain.
Here, we evaluated the multivariate response patterns across the entire cortex to discover the areas that contain information for familiar faces, subcategories of familiar faces, and identities of familiar faces. The searchlight maps revealed that different levels of semantic information are represented in topographically adjacent areas within posterior cingulate cortex (PCC). The results suggest that semantic processing of faces is mediated through progressive stages of information abstraction in PCC.
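The searchlight MVPA logic used in such analyses (decode condition labels from the local multivoxel pattern around each voxel, and assign the decoding accuracy to the sphere's center) can be sketched with synthetic data; the volume size, sphere radius, and nearest-class-mean classifier below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8, 8)                          # toy brain volume
n_trials = 40
labels = np.repeat([0, 1], n_trials // 2)  # two stimulus categories

# Synthetic data: only voxels in an "informative" sub-region carry signal.
data = rng.normal(size=(n_trials,) + shape)
data[labels == 1, 2:5, 2:5, 2:5] += 1.0

def sphere_indices(center, radius, shape):
    """Voxel coordinates within `radius` of `center` (searchlight sphere)."""
    grid = np.indices(shape)
    d2 = sum((grid[i] - center[i]) ** 2 for i in range(3))
    return np.nonzero(d2 <= radius ** 2)

def loo_accuracy(patterns, labels):
    """Leave-one-out nearest-class-mean decoding accuracy."""
    correct = 0
    for t in range(len(labels)):
        train = np.arange(len(labels)) != t
        means = [patterns[train & (labels == c)].mean(axis=0) for c in (0, 1)]
        pred = int(np.argmin([np.linalg.norm(patterns[t] - m) for m in means]))
        correct += pred == labels[t]
    return correct / len(labels)

# Accuracy map: decode within each local sphere, score its center voxel.
acc = np.zeros(shape)
for center in np.ndindex(shape):
    idx = sphere_indices(center, radius=2, shape=shape)
    acc[center] = loo_accuracy(data[:, idx[0], idx[1], idx[2]], labels)

# Decodable information should peak inside the informative region.
assert acc[3, 3, 3] > acc[7, 7, 7]
```

The resulting accuracy map plays the role of the "searchlight map" described above: clusters of above-chance accuracy mark regions whose local patterns carry category information.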
|
23
|
Contagiousness of human behaviors: can the replication of yawning enlighten us? [Contagiosité des comportements humains : la réplication du bâillement peut-elle nous éclairer ?] ANNALES MEDICO-PSYCHOLOGIQUES 2021. [DOI: 10.1016/j.amp.2021.09.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
24
|
Iannotti GR, Orepic P, Brunet D, Koenig T, Alcoba-Banqueri S, Garin DFA, Schaller K, Blanke O, Michel CM. EEG Spatiotemporal Patterns Underlying Self-other Voice Discrimination. Cereb Cortex 2021; 32:1978-1992. [PMID: 34649280 PMCID: PMC9070353 DOI: 10.1093/cercor/bhab329] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 08/02/2021] [Accepted: 08/03/2021] [Indexed: 11/24/2022] Open
Abstract
There is growing evidence showing that the representation of the human “self” recruits special systems across different functions and modalities. Compared to self-face and self-body representations, few studies have investigated neural underpinnings specific to self-voice. Moreover, self-voice stimuli in those studies were consistently presented through air, without bone conduction, rendering the sound of self-voice stimuli different from the self-voice heard during natural speech. Here, we combined psychophysics, voice-morphing technology, and high-density EEG in order to identify the spatiotemporal patterns underlying self-other voice discrimination (SOVD) in a population of 26 healthy participants, both with air- and bone-conducted stimuli. We identified a self-voice-specific EEG topographic map occurring around 345 ms post-stimulus and activating a network involving insula, cingulate cortex, and medial temporal lobe structures. Occurrence of this map was modulated both by SOVD task performance and by bone conduction. Specifically, the better participants performed at the SOVD task, the less frequently they activated this network. In addition, the same network was recruited less frequently with bone conduction, which, accordingly, increased SOVD task performance. This work could have an important clinical impact. Indeed, it reveals neural correlates of SOVD impairments, believed to account for auditory-verbal hallucinations, a common and highly distressing psychiatric symptom.
Affiliation(s)
- Giannina Rita Iannotti
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland
- Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
| | - Pavo Orepic
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
| | - Denis Brunet
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne and Geneva, 1015, Switzerland
| | - Thomas Koenig
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern 3000, Switzerland
| | - Sixto Alcoba-Banqueri
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
| | - Dorian F A Garin
- Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
| | - Karl Schaller
- Department of Neurosurgery, University Hospitals of Geneva and Faculty of Medicine, University of Geneva, 1205, Switzerland
| | - Olaf Blanke
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), 1202, Switzerland
| | - Christoph M Michel
- Functional Brain Mapping Lab, Department of Fundamental Neurosciences, University of Geneva, 1202, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne and Geneva, 1015, Switzerland
| |
|
25
|
Takahashi Y, Murata S, Idei H, Tomita H, Yamashita Y. Neural network modeling of altered facial expression recognition in autism spectrum disorders based on predictive processing framework. Sci Rep 2021; 11:14684. [PMID: 34312400 PMCID: PMC8313712 DOI: 10.1038/s41598-021-94067-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 07/06/2021] [Indexed: 11/20/2022] Open
Abstract
The mechanism underlying the emergence of emotional categories from visual facial expression information during the developmental process is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory. Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels as a developmental learning process, and were evaluated by the performance of recognizing unseen facial expressions for the test phase. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated. After the developmental learning process, emotional clusters emerged in the natural course of self-organization in higher-level neurons, even though emotional labels were not explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity through the process of minimizing precision-weighted prediction error. In contrast, the network simulating altered intrinsic neural excitability demonstrated reduced generalization capability and impaired emotional clustering in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition. These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.
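The recognition step described above (adjusting higher-level activity to minimize precision-weighted prediction error) can be illustrated in miniature with a linear generative mapping standing in for the trained RNN; all names and values below are toy assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy generative mapping: a higher-level state z predicts an observation
# x_hat = W @ z. Recognition iteratively adjusts z to minimize the
# precision-weighted prediction error, as in predictive processing.
n_obs, n_latent = 12, 3
W = rng.normal(size=(n_obs, n_latent))     # stands in for a trained network
z_true = np.array([1.0, -0.5, 0.25])
x = W @ z_true + rng.normal(scale=0.05, size=n_obs)  # noisy "sensory input"

precision = 1.0 / 0.05 ** 2   # inverse variance of the sensory noise
z = np.zeros(n_latent)        # initial higher-level guess
lr = 1e-4
for _ in range(2000):
    err = x - W @ z                    # prediction error
    z += lr * precision * (W.T @ err)  # precision-weighted update
assert np.allclose(z, z_true, atol=0.1)
```

In this picture, the ASD-like alteration modeled in the study corresponds to a mis-set precision (e.g., weighting noisy details too heavily), which distorts the inferred higher-level state.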
Collapse
Affiliation(s)
- Yuta Takahashi
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Department of Information Medicine, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo, 187-8502, Japan
| | - Shingo Murata
- Department of Electronics and Electrical Engineering, Faculty of Science and Technology, Keio University, Tokyo, Japan
| | - Hayato Idei
- Department of Intermedia Studies, Waseda University, Tokyo, Japan
| | - Hiroaki Tomita
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
| | - Yuichi Yamashita
- Department of Information Medicine, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo, 187-8502, Japan.
| |
Collapse
|
26
|
The processing of intimately familiar and unfamiliar voices: Specific neural responses of speaker recognition and identification. PLoS One 2021; 16:e0250214. [PMID: 33861789 PMCID: PMC8051806 DOI: 10.1371/journal.pone.0250214] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 04/03/2021] [Indexed: 11/19/2022] Open
Abstract
Research has repeatedly shown that familiar and unfamiliar voices elicit different neural responses. But it has also been suggested that different neural correlates associate with the feeling of having heard a voice and with knowing who the voice represents. The terminology used to designate these varying responses remains vague, creating a degree of confusion in the literature. Additionally, terms serving to designate tasks of voice discrimination, voice recognition, and speaker identification are often inconsistent, creating further ambiguities. The present study used event-related potentials (ERPs) to clarify the difference between responses to 1) unknown voices, 2) trained-to-familiar voices as speech stimuli are repeatedly presented, and 3) intimately familiar voices. In an experiment, 13 participants listened to repeated utterances recorded from 12 speakers. Only one of the 12 voices was intimately familiar to a participant, whereas the remaining 11 voices were unfamiliar. The frequency of presentation of these 11 unfamiliar voices varied, with only one being frequently presented (the trained-to-familiar voice). ERP analyses revealed different responses for intimately familiar and unfamiliar voices in two distinct time windows (P2 between 200-250 ms and a late positive component, LPC, between 450-850 ms post-onset), with late responses occurring only for intimately familiar voices. The LPC presented sustained shifts, whereas the short-time ERP components appear to reflect an early recognition stage. The trained voice also elicited distinct responses compared to rarely heard voices, but these occurred in a third time window (N250 between 300-350 ms post-onset). Overall, the timing of responses suggests that the processing of intimately familiar voices operates in two distinct steps: voice recognition, marked by a P2 on right centro-frontal sites, and speaker identification, marked by an LPC component. The recognition of frequently heard voices entails an independent recognition process marked by a differential N250. Based on the present results and previous observations, it is proposed that there is a need to distinguish between processes of voice "recognition" and "identification". The present study also specifies test conditions serving to reveal this distinction in neural responses, one of which bears on the length of speech stimuli, given the late responses associated with voice identification.
Collapse
|
27
|
Roswandowitz C, Swanborough H, Frühholz S. Categorizing human vocal signals depends on an integrated auditory-frontal cortical network. Hum Brain Mapp 2021; 42:1503-1517. [PMID: 33615612 PMCID: PMC7927295 DOI: 10.1002/hbm.25309] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 11/20/2020] [Accepted: 11/25/2020] [Indexed: 11/30/2022] Open
Abstract
Voice signals are relevant for auditory communication and suggested to be processed in dedicated auditory cortex (AC) regions. While recent reports highlighted an additional role of the inferior frontal cortex (IFC), a detailed description of the integrated functioning of the AC-IFC network and its task relevance for voice processing is missing. Using neuroimaging, we tested sound categorization while human participants either focused on the higher-order vocal-sound dimension (voice task) or the feature-based intensity dimension (loudness task) while listening to the same sound material. We found differential involvements of the AC and IFC depending on the task performed and whether the voice dimension was of task relevance or not. First, when comparing neural vocal-sound processing in our task-based design with previously reported passive-listening designs, we observed highly similar cortical activations in the AC and IFC. Second, during task-based vocal-sound processing, we observed voice-sensitive responses in the AC and IFC, whereas intensity processing was restricted to distinct AC regions. Third, the IFC flexibly adapted to the vocal sounds' task relevance, being active only when the voice dimension was task relevant. Fourth and finally, connectivity modeling revealed that vocal signals, independent of their task relevance, provided significant input to bilateral AC. However, only when attention was on the voice dimension did we find significant modulations of auditory-frontal connections. Our findings suggest that an integrated auditory-frontal network is essential for behaviorally relevant vocal-sound processing. The IFC seems to be an important hub of the extended voice network when representing higher-order vocal objects and guiding goal-directed behavior.
Collapse
Affiliation(s)
- Claudia Roswandowitz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Huw Swanborough
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
| |
Collapse
|
28
|
Hosaka T, Kimura M, Yotsumoto Y. Neural representations of own-voice in the human auditory cortex. Sci Rep 2021; 11:591. [PMID: 33436798 PMCID: PMC7804419 DOI: 10.1038/s41598-020-80095-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Accepted: 12/15/2020] [Indexed: 01/29/2023] Open
Abstract
We have a keen sensitivity when it comes to the perception of our own voices. We can detect not only the differences between ourselves and others, but also slight modifications of our own voices. Here, we examined the neural correlates underlying such sensitive perception of one's own voice. In the experiments, we modified the subjects' own voices by using five types of filters. The subjects rated the similarity of the presented voices to their own. We compared BOLD (Blood Oxygen Level Dependent) signals between the voices that subjects rated as least similar to their own voice and those they rated as most similar. The contrast revealed that the bilateral superior temporal gyrus exhibited greater activation while listening to the voice least similar to their own and lesser activation while listening to the voice most similar. Our results suggest that the superior temporal gyrus is involved in neural sharpening for the own-voice. The lesser activation observed for voices similar to the own-voice indicates that these areas respond not only to the differences between self and others, but also to the finer details of own-voices.
Collapse
Affiliation(s)
- Taishi Hosaka
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
| | - Marino Kimura
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
| | - Yuko Yotsumoto
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
29
|
Kovács G. Getting to Know Someone: Familiarity, Person Recognition, and Identification in the Human Brain. J Cogn Neurosci 2020; 32:2205-2225. [DOI: 10.1162/jocn_a_01627] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
In our everyday life, we continuously get to know people, dominantly through their faces. Several neuroscientific experiments have shown that familiarization changes the behavioral processing and underlying neural representation of the faces of others. Here, we propose a model of the process of how we actually get to know someone. First, the purely visual familiarization of unfamiliar faces occurs. Second, the accumulation of associated, nonsensory information refines the person representation, and finally, one reaches a stage where the effortless identification of very well-known persons occurs. We offer here an overview of neuroimaging studies, first evaluating how and in what ways the processing of unfamiliar and familiar faces differs and, second, analyzing fMRI adaptation and multivariate pattern analysis results to estimate where identity-specific representations are found in the brain. The available neuroimaging data suggest that, within the same network, different aspects of person-related information emerge gradually as one becomes more and more familiar with a person. We propose a novel model of familiarity and identity processing, in which the differential activation of long-term memory and emotion processing areas is essential for correct identification.
Collapse
|
30
|
Tsantani M, Cook R. Normal recognition of famous voices in developmental prosopagnosia. Sci Rep 2020; 10:19757. [PMID: 33184411 PMCID: PMC7661722 DOI: 10.1038/s41598-020-76819-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2020] [Accepted: 11/03/2020] [Indexed: 02/06/2023] Open
Abstract
Developmental prosopagnosia (DP) is a condition characterised by lifelong face recognition difficulties. Recent neuroimaging findings suggest that DP may be associated with aberrant structure and function in multimodal regions of cortex implicated in the processing of both facial and vocal identity. These findings suggest that both facial and vocal recognition may be impaired in DP. To test this possibility, we compared the performance of 22 DPs and a group of typical controls, on closely matched tasks that assessed famous face and famous voice recognition ability. As expected, the DPs showed severe impairment on the face recognition task, relative to typical controls. In contrast, however, the DPs and controls identified a similar number of voices. Despite evidence of interactions between facial and vocal processing, these findings suggest some degree of dissociation between the two processing pathways, whereby one can be impaired while the other develops typically. A possible explanation for this dissociation in DP could be that the deficit originates in the early perceptual encoding of face structure, rather than at later, post-perceptual stages of face identity processing, which may be more likely to involve interactions with other modalities.
Collapse
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
| | - Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
| |
Collapse
|
31
|
The Role of the Left and Right Anterior Temporal Poles in People Naming and Recognition. Neuroscience 2020; 440:175-185. [DOI: 10.1016/j.neuroscience.2020.05.040] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2019] [Revised: 05/21/2020] [Accepted: 05/23/2020] [Indexed: 01/27/2023]
|
32
|
Young AW, Frühholz S, Schweinberger SR. Face and Voice Perception: Understanding Commonalities and Differences. Trends Cogn Sci 2020; 24:398-410. [DOI: 10.1016/j.tics.2020.02.001] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 01/16/2020] [Accepted: 02/03/2020] [Indexed: 01/01/2023]
|
33
|
Prete G, Fabri M, Foschi N, Tommasi L. Voice gender categorization in the connected and disconnected hemispheres. Soc Neurosci 2020; 15:385-397. [PMID: 32130082 DOI: 10.1080/17470919.2020.1734654] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
The role of the left and right hemispheres in processing the gender of voices is controversial, with some evidence suggesting bilateral involvement and other evidence suggesting a right-hemispheric superiority. We investigated this issue in a gender categorization task involving healthy participants and a male split-brain patient: female or male natural voices were presented in one ear during the simultaneous presentation of white noise in the other ear (dichotic listening paradigm). Results revealed faster responses by the healthy participants for stimuli presented in the left than in the right ear, although no asymmetries emerged between the two ears in the accuracy of either the patient or the control group. Healthy participants were also more accurate at categorizing female than male voices, and an opposite-gender bias emerged, at least in females, with faster responses when categorizing voices of the opposite gender. The results support a bilateral hemispheric involvement in voice gender categorization, without asymmetries in the patient, but with faster categorization when voices are presented directly to the right hemisphere in the healthy sample. Moreover, when the two hemispheres directly interact with one another, a faster categorization of voices of the opposite gender emerges, which may be an evolutionarily grounded bias.
Collapse
Affiliation(s)
- Giulia Prete
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| | - Mara Fabri
- Department of Clinical and Experimental Medicine, Neuroscience and Cell Biology Section, Polytechnic University of Marche, Ancona, Italy
| | - Nicoletta Foschi
- Regional Epilepsy Center, Neurological Clinic, "Ospedali Riuniti", Ancona, Italy
| | - Luca Tommasi
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
| |
Collapse
|
34
|
Borowiak K, Maguinness C, von Kriegstein K. Dorsal-movement and ventral-form regions are functionally connected during visual-speech recognition. Hum Brain Mapp 2020; 41:952-972. [PMID: 31749219 PMCID: PMC7267922 DOI: 10.1002/hbm.24852] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Revised: 09/03/2019] [Accepted: 10/21/2019] [Indexed: 01/17/2023] Open
Abstract
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity) might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
Collapse
Affiliation(s)
- Kamila Borowiak
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Berlin School of Mind and Brain, Humboldt University of Berlin, Berlin, Germany
| | - Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
35
|
Cecchetto C, Fischmeister FPS, Gorkiewicz S, Schuehly W, Bagga D, Parma V, Schöpf V. Human body odor increases familiarity for faces during encoding-retrieval task. Hum Brain Mapp 2020; 41:1904-1919. [PMID: 31904899 PMCID: PMC7268037 DOI: 10.1002/hbm.24920] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 10/31/2019] [Accepted: 12/29/2019] [Indexed: 01/27/2023] Open
Abstract
Odors can increase memory performance when presented as context during both encoding and retrieval phases. Since information from different sensory modalities is integrated into unified conceptual knowledge, we hypothesized that the social information from body odors and faces would be integrated during encoding. The integration of such social information would enhance retrieval more so than when the encoding occurs in the context of common odors. To examine this hypothesis and to further explore the underlying neural correlates of this behavior, we conducted a functional magnetic resonance imaging study in which participants performed an encoding‐retrieval memory task for faces during the presentation of a common odor, a body odor, or clean air. At the behavioral level, results show that participants were less biased and faster in recognizing faces when these were presented together with the body odor than with the common odor. At the neural level, the encoding of faces in the body odor condition, compared to the common odor and clean air conditions, showed greater activation in areas related to associative memory (dorsolateral prefrontal cortex) and to odor perception and multisensory integration (orbitofrontal cortex). These results suggest that face and body odor information were integrated and, as a result, participants were faster in recognizing previously presented material.
Collapse
Affiliation(s)
- Cinzia Cecchetto
- Institute of Psychology, University of Graz, Graz, Austria
- BioTechMed, Graz, Austria
| | | | | | | | - Deepika Bagga
- Institute of Psychology, University of Graz, Graz, Austria
- BioTechMed, Graz, Austria
| | - Valentina Parma
- Department of Psychology, Temple University, Philadelphia, Pennsylvania
| | - Veronika Schöpf
- Institute of Psychology, University of Graz, Graz, Austria
- BioTechMed, Graz, Austria
- Computational Imaging Research Lab (CIR), Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
| |
Collapse
|
36
|
Faces and voices in the brain: A modality-general person-identity representation in superior temporal sulcus. Neuroimage 2019; 201:116004. [DOI: 10.1016/j.neuroimage.2019.07.017] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Revised: 05/17/2019] [Accepted: 07/07/2019] [Indexed: 11/18/2022] Open
|
37
|
Jagiello R, Pomper U, Yoneya M, Zhao S, Chait M. Rapid Brain Responses to Familiar vs. Unfamiliar Music - an EEG and Pupillometry study. Sci Rep 2019; 9:15570. [PMID: 31666553 PMCID: PMC6821741 DOI: 10.1038/s41598-019-51759-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 10/07/2019] [Indexed: 12/17/2022] Open
Abstract
Human listeners exhibit marked sensitivity to familiar music, perhaps most readily revealed by popular “name that tune” games, in which listeners often succeed in recognizing a familiar song based on extremely brief presentation. In this work, we used electroencephalography (EEG) and pupillometry to reveal the temporal signatures of the brain processes that allow differentiation between a familiar, well-liked piece of music and an unfamiliar one. In contrast to previous work, which has quantified gradual changes in pupil diameter (the so-called “pupil dilation response”), here we focus on the occurrence of pupil dilation events. This approach is substantially more sensitive in the temporal domain and allowed us to tap early activity within the putative salience network. Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally relevant song and an acoustically matched, unfamiliar song, presented in random order. A group of control participants (N = 12), who were unfamiliar with all of the songs, was also tested. We reveal a rapid differentiation between snippets from familiar and unfamiliar songs: Pupil responses showed a greater dilation rate to familiar music from 100–300 ms post-stimulus-onset, consistent with a faster activation of the autonomic salience network. Brain responses measured with EEG showed a later differentiation between familiar and unfamiliar music from 350 ms post-onset. Remarkably, the cluster pattern identified in the EEG response is very similar to that commonly found in classic old/new memory retrieval paradigms, suggesting that the recognition of brief, randomly presented music snippets draws on similar processes.
Collapse
Affiliation(s)
- Robert Jagiello
- Ear Institute, University College London, London, UK
- Institute of Cognitive and Evolutionary Anthropology, University of Oxford, Oxford, UK
| | - Ulrich Pomper
- Ear Institute, University College London, London, UK
- Faculty of Psychology, University of Vienna, Vienna, Austria
| | - Makoto Yoneya
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, 243-0198, Japan
| | - Sijia Zhao
- Ear Institute, University College London, London, UK
| | - Maria Chait
- Ear Institute, University College London, London, UK.
| |
Collapse
|
38
|
Wang Y, Huang H, Yang H, Xu J, Mo S, Lai H, Wu T, Zhang J. Influence of EEG References on N170 Component in Human Facial Recognition. Front Neurosci 2019; 13:705. [PMID: 31354414 PMCID: PMC6637847 DOI: 10.3389/fnins.2019.00705] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2018] [Accepted: 06/21/2019] [Indexed: 11/26/2022] Open
Abstract
The choice of the reference electrode scheme is an important step in event-related potential (ERP) analysis. To explore the optimal electroencephalogram reference scheme for ERP signals related to facial recognition, we investigated the influence of the average reference (AR), mean mastoid reference (MM), and Reference Electrode Standardization Technique (REST) on the N170 component via statistical analysis, statistical parametric scalp mappings (SPSM), and source analysis. The statistical results showed that the choice of reference scheme has little effect on N170 latency (p > 0.05), but has a significant impact on N170 amplitude (p < 0.05). ANOVA results show that, for all three reference schemes, the N170 latency and amplitude induced by the unfamiliar face differed significantly from those induced by the scrambled face (p < 0.05). Specifically, the SPSM results show an anterior and a temporo-occipital distribution for AR and REST, whereas only an anterior distribution is shown for MM. Moreover, the circumstantial evidence provided by source analysis is more consistent with the SPSM of AR and REST than with that of MM. These results indicate that experimental results obtained under the AR and REST references are more objective and appropriate. Thus, it is more appropriate to use AR and REST reference schemes in future facial recognition experiments.
Collapse
Affiliation(s)
- Yi Wang
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Hua Huang
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Hao Yang
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Jian Xu
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Site Mo
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Hongyu Lai
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| | - Ting Wu
- Department of Magnetoencephalography, Nanjing Brain Hospital Affiliated to Nanjing Medical University, Nanjing, China
| | - Junpeng Zhang
- Department of Medical Information Engineering, College of Electrical Engineering, Sichuan University, Chengdu, China
| |
Collapse
|
39
|
Pinheiro AP, Farinha-Fernandes A, Roberto MS, Kotz SA. Self-voice perception and its relationship with hallucination predisposition. Cogn Neuropsychiatry 2019; 24:237-255. [PMID: 31177920 DOI: 10.1080/13546805.2019.1621159] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Introduction: Auditory verbal hallucinations (AVH) are a core symptom of psychotic disorders such as schizophrenia but are also reported in 10-15% of the general population. Impairments in self-voice recognition are frequently reported in schizophrenia and associated with the severity of AVH, particularly when the self-voice has a negative quality. However, whether self-voice processing is also affected in nonclinical voice hearers remains to be specified. Methods: Thirty-five nonclinical participants, varying in hallucination predisposition (HP) based on the Launay-Slade Hallucination Scale, listened to prerecorded words and vocalisations differing in identity (self/other) and emotional quality. In Experiment 1, participants indicated whether words were spoken in their own voice, another voice, or whether they were unsure (recognition task). They were also asked whether pairs of words/vocalisations were uttered by the same or by a different speaker (discrimination task). In Experiment 2, participants judged the emotional quality of the words/vocalisations. Results: In Experiment 1, hallucination predisposition affected voice discrimination and recognition, irrespective of stimulus valence. Hallucination predisposition did not affect the evaluation of the emotional valence of words/vocalisations (Experiment 2). Conclusions: These findings suggest that nonclinical participants with high HP experience altered voice identity processing, whereas HP does not affect the perception of vocal emotion. Specific alterations in self-voice perception in clinical and nonclinical voice hearers may establish a core feature of the psychosis continuum.
Collapse
Affiliation(s)
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
| | | | - Magda S Roberto
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
| | - Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
40
|
Borghesani V, Narvid J, Battistella G, Shwe W, Watson C, Binney RJ, Sturm V, Miller Z, Mandelli ML, Miller B, Gorno-Tempini ML. "Looks familiar, but I do not know who she is": The role of the anterior right temporal lobe in famous face recognition. Cortex 2019; 115:72-85. [PMID: 30772608 PMCID: PMC6759326 DOI: 10.1016/j.cortex.2019.01.006] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Revised: 09/29/2018] [Accepted: 01/07/2019] [Indexed: 12/15/2022]
Abstract
Processing a famous face involves a cascade of steps including detecting the presence of a face, recognizing it as familiar, accessing semantic/biographical information about the person, and finally, if required, producing the proper name. Decades of neuropsychological and neuroimaging studies have identified a network of occipital and temporal brain regions ostensibly comprising the 'core' system for face processing. Recent research has also begun to characterize an 'extended' network, including anterior temporal and frontal regions. However, there is disagreement about which brain areas are involved in each step, as many aspects of face processing occur automatically in healthy individuals and rarely dissociate in patients. Moreover, some common phenomena are not easily induced in an experimental setting, such as having a sense of familiarity without being able to recall who the person is. Patients with the semantic variant of Primary Progressive Aphasia (svPPA) often recognize a famous face as familiar, even when they cannot specifically recall the proper name or biographical details. In this study, we analyzed data from a large sample of 105 patients with neurodegenerative disorders, including 43 svPPA, to identify the neuroanatomical substrates of three different steps of famous face processing. Using voxel-based morphometry, we correlated whole-brain grey matter volumes with scores on three experimental tasks that targeted familiarity judgment, semantic/biographical information retrieval, and naming. Performance in naming and semantic association significantly correlates with grey matter volume in the left anterior temporal lobe, whereas familiarity judgment correlates with the integrity of the right anterior middle temporal gyrus. These findings shed light on the neuroanatomical substrates of key components of overt face processing, addressing issues of functional lateralization and deepening our understanding of the neural substrates of semantic knowledge.
Affiliation(s)
- Valentina Borghesani
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA
- Jared Narvid
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Giovanni Battistella
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA
- Wendy Shwe
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA
- Christa Watson
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA; Department of Neurology, Dyslexia Center, University of California, San Francisco, CA, USA
- Virginia Sturm
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA
- Zachary Miller
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA; Department of Neurology, Dyslexia Center, University of California, San Francisco, CA, USA
- Maria Luisa Mandelli
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA; Department of Neurology, Dyslexia Center, University of California, San Francisco, CA, USA
- Bruce Miller
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA
- Maria Luisa Gorno-Tempini
- Department of Neurology, Memory and Aging Center, University of California, San Francisco, CA, USA; Department of Neurology, Dyslexia Center, University of California, San Francisco, CA, USA
|
41
|
|
42
|
Vila J, Morato C, Lucas I, Guerra P, Castro-Laguardia AM, Bobes MA. The affective processing of loved familiar faces and names: Integrating fMRI and heart rate. PLoS One 2019; 14:e0216057. [PMID: 31039182 PMCID: PMC6490893 DOI: 10.1371/journal.pone.0216057] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 11/09/2018] [Accepted: 04/12/2019] [Indexed: 01/19/2023] Open
Abstract
The neuroscientific study of love has been boosted by an extended corpus of research on face-identity recognition. However, few studies have compared the emotional mechanisms activated by loved faces and names, and none has simultaneously examined fMRI and autonomic measures. The present study combined fMRI with the heart rate response while 21 participants (10 males) passively viewed the face or the written name of 4 loved people and 4 unknown people. The results showed heart rate acceleration, together with brain activations, that was significantly greater for loved people than for unknown people. Significant correlations were found between heart rate and brain activation in frontal areas for faces, and in temporal areas for names. The results are discussed in the context of previous studies using the same passive viewing procedure, highlighting the relevance of integrating peripheral and central measures in the scientific study of positive emotion and love.
Affiliation(s)
- Jaime Vila
- Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Cristina Morato
- Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Ignacio Lucas
- Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
- Pedro Guerra
- Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain
|
43
|
Barton JJS, Albonico A, Susilo T, Duchaine B, Corrow SL. Object recognition in acquired and developmental prosopagnosia. Cogn Neuropsychol 2019; 36:54-84. [PMID: 30947609 DOI: 10.1080/02643294.2019.1593821] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Indexed: 12/22/2022]
Abstract
Whether face and object recognition are dissociated in prosopagnosia continues to be debated: a recent review highlighted deficiencies in prior studies regarding the evidence for such a dissociation. Our goal was to study cohorts with acquired and developmental prosopagnosia using a complementary battery of object recognition tests that address these prior limitations, and to evaluate residual effects of object expertise. We studied 15 subjects with acquired and 12 subjects with developmental prosopagnosia on three tests: the Old/New Tests, the Cambridge Bicycle Memory Test, and the Expertise-adjusted Test of Car Recognition. Most subjects with developmental prosopagnosia were normal on the Old/New Tests; among those with acquired prosopagnosia, subjects with occipitotemporal lesions often showed impairments while those with anterior temporal lesions did not. Ten subjects showed a putative classical dissociation between the Cambridge Face and Bicycle Memory Tests, seven of whom had normal reaction times. Both developmental and acquired groups showed reduced car recognition on the expertise-adjusted test, though residual effects of expertise were still evident. Two subjects with developmental prosopagnosia met criteria for normal object recognition across all tests. We conclude that strong evidence for intact object recognition can be found in a few subjects, but the majority show deficits, particularly those with the acquired form. Both acquired and developmental forms show residual but reduced object expertise effects.
Affiliation(s)
- Jason J S Barton
- Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, and Psychology, University of British Columbia, Vancouver, Canada
- Andrea Albonico
- Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, and Psychology, University of British Columbia, Vancouver, Canada
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Brad Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Sherryse L Corrow
- Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, and Psychology, University of British Columbia, Vancouver, Canada; Department of Psychology, Bethel University, Minneapolis, MN, USA
|
44
|
Schelinski S, von Kriegstein K. The Relation Between Vocal Pitch and Vocal Emotion Recognition Abilities in People with Autism Spectrum Disorder and Typical Development. J Autism Dev Disord 2019; 49:68-82. [PMID: 30022285 PMCID: PMC6331502 DOI: 10.1007/s10803-018-3681-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Indexed: 01/02/2023]
Abstract
We tested the relation between vocal emotion and vocal pitch perception abilities in adults with high-functioning autism spectrum disorder (ASD) and pairwise-matched adults with typical development. The ASD group had impaired vocal pitch perception but typical non-vocal pitch and vocal timbre perception. The ASD group also showed less accurate vocal emotion perception than the comparison group, and vocal emotion perception abilities were correlated with traits and symptoms associated with ASD. Vocal pitch and vocal emotion perception abilities were significantly correlated in the comparison group only. Our results suggest that vocal emotion recognition difficulties in ASD might be based not only on difficulties with complex social tasks but also on difficulties with processing basic sensory features, such as vocal pitch.
Affiliation(s)
- Stefanie Schelinski
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Technische Universität Dresden, Faculty of Psychology, Bamberger Straße 7, 01187 Dresden, Germany
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Technische Universität Dresden, Faculty of Psychology, Bamberger Straße 7, 01187 Dresden, Germany
|
45
|
Schneider B, Heskje J, Bruss J, Tranel D, Belfi AM. The left temporal pole is a convergence region mediating the relation between names and semantic knowledge for unique entities: Further evidence from a "recognition-from-name" study in neurological patients. Cortex 2018; 109:14-24. [PMID: 30273798 PMCID: PMC6263857 DOI: 10.1016/j.cortex.2018.08.026] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Received: 03/20/2018] [Revised: 07/17/2018] [Accepted: 08/30/2018] [Indexed: 12/21/2022]
Abstract
Prior research has implicated the left temporal pole (LTP) as a critical region for naming semantically unique items, including famous faces, landmarks, and musical melodies. Most studies have used a confrontation naming paradigm, where a participant is presented with a stimulus and asked to retrieve its name. We have proposed previously that the LTP functions as a two-way, bidirectional convergence region brokering between conceptual knowledge and proper names for unique entities. Under this hypothesis, damage to the LTP should result in a "two-way" impairment: (1) defective proper name retrieval when presented with a unique stimulus (as shown in prior work); and (2) defective concept retrieval when presented with a proper name. Here, we directly tested the second prediction using a "recognition-from-name" paradigm. Participants were patients with LTP damage, brain-damaged comparisons with damage outside the LTP, and healthy comparisons. Participants were presented with names of famous persons (e.g., "Marilyn Monroe"), landmarks (e.g., "Leaning Tower of Pisa"), or melodies (e.g., "Rudolph the Red-Nosed Reindeer") and were asked to provide conceptual knowledge about each. We found that individuals with damage to the LTP were significantly impaired at conceptual knowledge retrieval when given names of famous people and landmarks (but not melodies). This outcome supports the theory that the LTP is a bidirectional convergence region for proper naming, but suggests that melody retrieval may rely on processes different from those supported by the LTP.
Affiliation(s)
- Brett Schneider
- Department of Neurology, University of Iowa Carver College of Medicine, USA
- Jonah Heskje
- Department of Neurology, University of Iowa Carver College of Medicine, USA
- Joel Bruss
- Department of Neurology, University of Iowa Carver College of Medicine, USA
- Daniel Tranel
- Department of Neurology, University of Iowa Carver College of Medicine, USA; Department of Psychological and Brain Sciences, University of Iowa, USA
- Amy M Belfi
- Department of Psychological Science, Missouri University of Science and Technology, USA
|
46
|
Mühl C, Sheil O, Jarutytė L, Bestelmeyer PEG. The Bangor Voice Matching Test: A standardized test for the assessment of voice perception ability. Behav Res Methods 2018; 50:2184-2192. [PMID: 29124718 PMCID: PMC6267520 DOI: 10.3758/s13428-017-0985-4] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Indexed: 11/10/2022]
Abstract
Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, evidence has emerged for a similar deficit in voice perception (phonagnosia). Face perception tests have been readily available for years, advancing our understanding of the underlying mechanisms of face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test of voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure the test discriminates across a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With high test-retest reliability (r = .86) and a short assessment duration (~10 min), the test examines individual abilities reliably and quickly, and therefore also has potential for use in developmental and neuropsychological populations.
Affiliation(s)
- Constanze Mühl
- School of Psychology, Bangor University, Brigantia Building, Penrallt Road, Bangor, Gwynedd, LL57 2AS, UK
- Orla Sheil
- School of Psychology, Bangor University, Brigantia Building, Penrallt Road, Bangor, Gwynedd, LL57 2AS, UK
- Lina Jarutytė
- School of Experimental Psychology, University of Bristol, Bristol, BS8 1TU, UK
- Patricia E G Bestelmeyer
- School of Psychology, Bangor University, Brigantia Building, Penrallt Road, Bangor, Gwynedd, LL57 2AS, UK
|
47
|
Mühl C, Bestelmeyer PEG. Assessing susceptibility to distraction along the vocal processing hierarchy. Q J Exp Psychol (Hove) 2018; 72:1657-1666. [DOI: 10.1177/1747021818807183] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Indexed: 11/17/2022]
Abstract
Recent models of voice perception propose a hierarchy of steps leading from a more general, "low-level" acoustic analysis of the voice signal to a voice-specific, "higher-level" analysis. We aimed to engage two of these stages: first, a more general detection task in which voices had to be identified amid environmental sounds, and, second, a more voice-specific task requiring a same/different decision about unfamiliar speaker pairs (Bangor Voice Matching Test [BVMT]). We explored how vulnerable voice recognition is to interfering distractor voices, and whether performance on the aforementioned tasks could predict resistance to such interference. In addition, we manipulated the similarity of distractor voices to explore the impact of distractor similarity on recognition accuracy. We found moderate correlations between voice detection ability and resistance to distraction (r = .44), and between BVMT performance and resistance to distraction (r = .57). A hierarchical regression revealed both tasks as significant predictors of the ability to tolerate distractors (R2 = .36). The first stage of the regression (BVMT as sole predictor) already explained 32% of the variance. Descriptively, the "higher-level" BVMT was a better predictor (β = .47) than the more general detection task (β = .25), although further analysis revealed no significant difference between the two beta weights. Furthermore, distractor similarity did not affect performance on the distractor task. Overall, our findings suggest that specific stages of the voice perception process can be targeted selectively. This could help explore different stages of voice perception and their contributions to specific auditory abilities, possibly also in forensic and clinical settings.
|
48
|
Davies-Thompson J, Elli GV, Rezk M, Benetti S, van Ackeren M, Collignon O. Hierarchical Brain Network for Face and Voice Integration of Emotion Expression. Cereb Cortex 2018; 29:3590-3605. [DOI: 10.1093/cercor/bhy240] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 11/14/2017] [Revised: 08/29/2018] [Indexed: 12/22/2022] Open
Abstract
The brain has separate specialized computational units for processing faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions integrated in the brain when delivered by different sensory modalities (faces and voices)? In this study, we characterized the brain's response to faces, voices, and combined face–voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic causal modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area and the voice-selective temporal voice area, with emotional expression affecting the connection strength. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.
Affiliation(s)
- Jodie Davies-Thompson
- Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, via delle Regole, 38123 Mattarello (TN), Italy
- Face Research, Swansea (FaReS), Department of Psychology, College of Human and Health Sciences, Swansea University, Singleton Park, Swansea, UK
- Giulia V Elli
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mohamed Rezk
- Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, via delle Regole, 38123 Mattarello (TN), Italy
- Institute of Research in Psychology (IPSY), Institute of Neuroscience (IoNS), University of Louvain (UCL), 10 Place du Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium
- Stefania Benetti
- Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, via delle Regole, 38123 Mattarello (TN), Italy
- Markus van Ackeren
- Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, via delle Regole, 38123 Mattarello (TN), Italy
- Olivier Collignon
- Crossmodal Perception and Plasticity Laboratory, Center of Mind/Brain Sciences, University of Trento, via delle Regole, 38123 Mattarello (TN), Italy
- Institute of Research in Psychology (IPSY), Institute of Neuroscience (IoNS), University of Louvain (UCL), 10 Place du Cardinal Mercier, 1348 Louvain-la-Neuve, Belgium
|
49
|
Aglieri V, Chaminade T, Takerkart S, Belin P. Functional connectivity within the voice perception network and its behavioural relevance. Neuroimage 2018; 183:356-365. [PMID: 30099078 PMCID: PMC6215333 DOI: 10.1016/j.neuroimage.2018.08.011] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Received: 06/05/2018] [Revised: 07/13/2018] [Accepted: 08/08/2018] [Indexed: 12/13/2022] Open
Abstract
Recognizing who is speaking is a cognitive ability characterized by considerable individual differences, which could relate to the inter-individual variability observed in voice-elicited BOLD activity. Since voice perception is sustained by a complex brain network involving temporal voice areas (TVAs) and, less consistently, extra-temporal regions such as frontal cortices, we computed functional connectivity (FC) during an fMRI voice localizer (passive listening to voices vs. non-voices) within twelve temporal and frontal voice-sensitive regions ("voice patches"), defined individually for each subject (N = 90) to account for inter-individual variability. Results revealed that voice patches were positively co-activated during voice listening and were characterized by different FC patterns depending on their location (anterior/posterior) and hemisphere. Importantly, FC between right frontal and temporal voice patches was behaviorally relevant: FC increased significantly with voice recognition abilities as measured in a voice recognition test performed outside the scanner. Hence, this study highlights the importance of frontal regions in voice perception and supports the idea that examining FC between stimulus-specific and higher-order frontal regions can help explain individual differences in processing social stimuli such as voices.
Affiliation(s)
- Virginia Aglieri
- Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France
- Thierry Chaminade
- Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France
- Sylvain Takerkart
- Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France
- Pascal Belin
- Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France; International Laboratories for Brain, Music and Sound, Department of Psychology, Université de Montréal, McGill University, Montreal, QC, Canada
|
50
|
Gainotti G. How can familiar voice recognition be intact if unfamiliar voice discrimination is impaired? An introduction to this special section on familiar voice recognition. Neuropsychologia 2018; 116:151-153. [PMID: 29627274 DOI: 10.1016/j.neuropsychologia.2018.04.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 11/15/2022]
Affiliation(s)
- Guido Gainotti
- Institute of Neurology of the Policlinico Gemelli, Catholic University of Rome, Italy; IRCCS Fondazione Santa Lucia, Department of Clinical and Behavioral Neurology, Rome, Italy
|