151. Predicting variations of perceptual performance across individuals from neural activity using pattern classifiers. Neuroimage 2010; 51:1425-37. [PMID: 20302949] [DOI: 10.1016/j.neuroimage.2010.03.030]
Abstract
Within the past decade, computational approaches adopted from the field of machine learning have provided neuroscientists with powerful new tools for analyzing neural data. For instance, previous studies have applied pattern classification algorithms to electroencephalography (EEG) data to predict the category of presented visual stimuli, human observers' decision choices, and task difficulty. Here, we quantitatively compare the ability of pattern classifiers and three ERP metrics (peak amplitude, mean amplitude, and onset latency of the face-selective N170) to predict variations in behavioral performance across individuals in a difficult perceptual task: identifying images of faces and cars embedded in noise. We investigate three pattern classifiers (Classwise Principal Component Analysis, CPCA; Linear Discriminant Analysis, LDA; and Support Vector Machine, SVM), five training methods differing in the selection of training data sets, and three analysis procedures for the ERP measures. We show that all three pattern classifier algorithms surpass traditional ERP measurements in their ability to predict individual differences in performance. Although the differences across pattern classifiers were not large, the CPCA method, with training data sets restricted to EEG activity from trials in which observers expressed high confidence about their decisions, predicted observers' perceptual performance best. We also show that the neural activity predicting performance across individuals was distributed through time, starting at 120 ms and, unlike the face-selective ERP response, sustained for more than 400 ms after stimulus presentation, indicating that both early and late components carry information correlated with observers' behavioral performance.
Together, our results further demonstrate the potential of pattern classifiers, compared with more traditional ERP techniques, as an analysis tool for modeling the spatiotemporal dynamics of the human brain and relating neural activity to behavior.
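The single-trial classification approach this entry compares against ERP metrics can be illustrated with a toy sketch: a minimal two-class Fisher linear discriminant (one of the three classifier families tested) trained on simulated single-trial EEG features. All dimensions, effect sizes, and labels below are invented for illustration; the study's actual pipeline (CPCA, confidence-restricted training sets, cross-validation across observers) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial EEG features (e.g., amplitudes at selected
# electrodes/time points): n trials per class, d features. Effect size invented.
n, d = 200, 16
X_face = rng.normal(0.6, 1.0, (n, d))   # "face" trials
X_car = rng.normal(0.0, 1.0, (n, d))    # "car" trials
X = np.vstack([X_face, X_car])
y = np.r_[np.ones(n), np.zeros(n)]

def fit_lda(X, y):
    """Minimal two-class Fisher linear discriminant."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    Sw = np.cov(X[y == 1].T) + np.cov(X[y == 0].T)  # pooled within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                # discriminant direction
    b = -0.5 * w @ (m1 + m0)                        # threshold at the class midpoint
    return w, b

w, b = fit_lda(X, y)
accuracy = ((X @ w + b > 0) == (y == 1)).mean()
print(f"single-trial decoding accuracy: {accuracy:.2f}")
```

In the study, per-observer decoding accuracy of this kind (rather than an ERP peak measure) was the predictor correlated with behavioral performance across individuals.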
152. Taylor JC, Wiggett AJ, Downing PE. fMRI-adaptation studies of viewpoint tuning in the extrastriate and fusiform body areas. J Neurophysiol 2010; 103:1467-77. [DOI: 10.1152/jn.00637.2009]
Abstract
People are easily able to perceive the human body across different viewpoints, but the neural mechanisms underpinning this ability are currently unclear. In three experiments, we used functional MRI (fMRI) adaptation to study the view-invariance of representations in two cortical regions that have previously been shown to be sensitive to visual depictions of the human body—the extrastriate and fusiform body areas (EBA and FBA). The BOLD response to sequentially presented pairs of bodies was treated as an index of view invariance. Specifically, we compared trials in which the bodies in each image held identical poses (seen from different views) to trials containing different poses. EBA and FBA adapted to identical views of the same pose, and both showed a progressive rebound from adaptation as a function of the angular difference between views, up to ∼30°. However, these adaptation effects were eliminated when the body stimuli were followed by a pattern mask. Delaying the mask onset increased the response (but not the adaptation effect) in EBA, leaving FBA unaffected. We interpret these masking effects as evidence that view-dependent fMRI adaptation is driven by later waves of neuronal responses in the regions of interest. Finally, in a whole brain analysis, we identified an anterior region of the left inferior temporal sulcus (l-aITS) that responded linearly to stimulus rotation, but showed no selectivity for bodies. Our results show that body-selective cortical areas exhibit a similar degree of view-invariance as other object selective areas—such as the lateral occipitotemporal area (LO) and posterior fusiform gyrus (pFs).
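The rebound-from-adaptation measure used in these experiments can be expressed as a simple index over per-condition BOLD estimates. A numerical sketch, with entirely hypothetical BOLD values whose shape merely mimics the reported pattern (adapted at 0°, progressive rebound saturating near 30°):

```python
import numpy as np

# rotation (degrees) between the two bodies in each sequential pair
angles = np.array([0, 10, 20, 30, 60, 90])
# hypothetical per-condition BOLD estimates: adapted at 0 deg,
# progressive release from adaptation that saturates around 30 deg
bold = np.array([0.50, 0.62, 0.71, 0.80, 0.81, 0.80])

# release-from-adaptation index: 0 = fully adapted, 1 = full release
release = (bold - bold[0]) / (bold.max() - bold[0])
for a, r in zip(angles, release):
    print(f"{a:3d} deg: release index {r:.2f}")
```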
Affiliation(s)
- John C. Taylor, Alison J. Wiggett, Paul E. Downing: Wales Institute of Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, United Kingdom
153.
154. Jacques C, Rossion B. Misaligning face halves increases and delays the N170 specifically for upright faces: implications for the nature of early face representations. Brain Res 2010; 1318:96-109. [DOI: 10.1016/j.brainres.2009.12.070]
155. Eimer M, Kiss M, Nicholas S. Response profile of the face-sensitive N170 component: a rapid adaptation study. Cereb Cortex 2010; 20:2442-52. [DOI: 10.1093/cercor/bhp312]
156. Rosburg T, Ludowig E, Dümpelmann M, Alba-Ferrara L, Urbach H, Elger CE. The effect of face inversion on intracranial and scalp recordings of event-related potentials. Psychophysiology 2010; 47:147-57. [DOI: 10.1111/j.1469-8986.2009.00881.x]
157.
Abstract
Fisch et al. report in this issue of Neuron the results of an investigation of the neural correlates of conscious perception. They find an early, dramatic, and long-lasting gamma response in high-level visual areas, when (and only when) a rapidly presented image is perceived.
158. de Gelder B, Van den Stock J, Meeren HKM, Sinke CBA, Kret ME, Tamietto M. Standing up for the body: recent progress in uncovering the networks involved in the perception of bodies and bodily expressions. Neurosci Biobehav Rev 2009; 34:513-27. [PMID: 19857515] [DOI: 10.1016/j.neubiorev.2009.10.008]
Abstract
Recent studies of monkeys and humans have identified several brain regions that respond to bodies. Researchers have so far mainly addressed the same questions about bodies and bodily expressions that are already familiar from three decades of face and facial expression studies. Our present goal is to review behavioral, electrophysiological and neurofunctional studies on whole body and bodily expression perception against the background of what is known about face perception. We review all currently available evidence in more detail than has been done so far, but we also argue for a more theoretically motivated comparison of faces and bodies that reflects broader concerns than only the modularity or category specificity of faces or bodies.
Affiliation(s)
- Beatrice de Gelder: Cognitive and Affective Neuroscience Laboratory, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands

159. Charest I, Pernet CR, Rousselet GA, Quiñones I, Latinus M, Fillion-Bilodeau S, Chartrand JP, Belin P. Electrophysiological evidence for an early processing of human voices. BMC Neurosci 2009; 10:127. [PMID: 19843323] [PMCID: PMC2770575] [DOI: 10.1186/1471-2202-10-127]
Abstract
BACKGROUND Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150 ms to 200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. RESULTS ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories - voices, bird songs and environmental sounds - whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. CONCLUSION Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.
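The onset-latency analysis reported here (the earliest voice/non-voice amplitude difference) can be sketched on toy grand averages: subtract the two ERPs and find the first sample where the difference exceeds a criterion. The waveforms, sampling rate, and amplitude criterion below are all invented; a real analysis would use point-wise statistics across subjects rather than a fixed threshold.

```python
import numpy as np

fs = 500                       # Hz, assumed sampling rate
t = np.arange(0, 0.4, 1 / fs)  # 0-400 ms post stimulus onset

# Toy grand-average ERPs (arbitrary units): voices add a positivity near 200 ms.
erp_nonvoice = 2.0 * np.sin(2 * np.pi * 5 * t)
erp_voice = erp_nonvoice + 1.5 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

diff = erp_voice - erp_nonvoice
criterion = 0.5                # arbitrary amplitude criterion (same units as ERP)
onset_ms = 1000 * t[np.argmax(np.abs(diff) > criterion)]
print(f"voice/non-voice difference emerges at {onset_ms:.0f} ms")
```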
Affiliation(s)
- Ian Charest: Centre for Cognitive NeuroImaging (CCNi) & Department of Psychology, University of Glasgow, Glasgow, UK

160. Zhang Y, Qiu J, Huang H, Zhang Q, Bao B. Chinese character recognition in mirror reading: evidence from event-related potential. Int J Psychol 2009; 44:360-8. [PMID: 22029614] [DOI: 10.1080/00207590802500190]
Abstract
Mirror reading requires recognizing words and letters in a mirror-reversed pattern relative to normal reading, and the cognitive mechanism underlying it may involve two critical processes: visuospatial transformation and linguistic regulation. Chinese characters, unlike English words, are characterized by unique features of orthography and spelling. Using ERP techniques, the present study investigated the neural correlates underlying the mirror reading of Chinese characters, and whether the cognitive processes underlying the recognition of mirrored Chinese characters differ from those for alphabetic words. Twelve native Chinese speakers participated in the experiment, during which they were instructed to make an animal/nonanimal decision. The stimuli varied in word category (animal vs. nonanimal) and presentation format (normal vs. mirror-reversed). The analyses focused on three measures: reaction times (RTs) for Chinese words in normal and mirror-reversed formats, and the peak latencies and peak amplitudes of ERP components elicited by mirror-reversed and normal Chinese words. The results from implicit reading provide evidence for a mirror-reversal effect. Behaviorally, mirror-reversed words were more difficult to identify than normal words, with delayed RTs. Moreover, a clear N2 component, with maximal activity in the 200-250 ms interval, was more negative for mirror-reversed words than for normal words at posterior regions, whereas there were no latency differences between conditions. The occipital N2 might be closely related to abstract word form representation; the larger N2 amplitude for mirror-reversed Chinese words is interpreted as reflecting visuospatial transformation that compensates for impaired word form analysis.
The absence of an N2 latency delay indicates that word form analysis and visuospatial transformation may proceed in parallel.
Affiliation(s)
- Ye Zhang: Southwest University, Chongqing, China

161. Taylor MJ, Arsalidou M, Bayless SJ, Morris D, Evans JW, Barbeau EJ. Neural correlates of personally familiar faces: parents, partner and own faces. Hum Brain Mapp 2009; 30:2008-20. [PMID: 18726910] [DOI: 10.1002/hbm.20646]
Abstract
Investigations of the neural correlates of face recognition have typically used old/new paradigms in which subjects learn to recognize new faces or identify famous faces. Familiar faces, however, also include one's own face and partners' and parents' faces. Using event-related fMRI, we examined the neural correlates of these personally familiar faces. Ten participants were presented with photographs of own, partner, parent, famous and unfamiliar faces and responded to a distinct target. Whole-brain, region-of-interest (fusiform gyrus and cingulate gyrus), and multiple linear regression analyses were conducted. Compared with baseline, all familiar faces activated the fusiform gyrus; own faces also activated occipital regions and the precuneus; partner faces activated similar areas, plus the parahippocampal gyrus, middle and superior temporal gyri and middle frontal gyrus. Compared with unfamiliar faces, only personally familiar faces activated the cingulate gyrus, and the extent of activation varied with face category. Partner faces also activated the insula, amygdala and thalamus. Region-of-interest analyses and laterality indices showed anatomical distinctions in the processing of personally familiar faces within the fusiform and cingulate gyri. Famous faces were right-lateralized, whereas personally familiar faces, particularly partner and own faces, elicited bilateral activations. Regression analyses showed that experiential predictors modulated neural activity related to own and partner faces. Thus, personally familiar faces activated the core visual areas as well as extended frontal regions related to semantic and person knowledge, and the extent and location of activation varied with face type.
Affiliation(s)
- Margot J Taylor: Department of Diagnostic Imaging and Research Institute, Hospital for Sick Children, Toronto, Canada

162. Wilkinson D, Ko P, Wiriadjaja A, Kilduff P, McGlinchey R, Milberg W. Unilateral damage to the right cerebral hemisphere disrupts the apprehension of whole faces and their component parts. Neuropsychologia 2009; 47:1701-11. [DOI: 10.1016/j.neuropsychologia.2009.02.008]
163. Itier RJ, Batty M. Neural bases of eye and gaze processing: the core of social cognition. Neurosci Biobehav Rev 2009; 33:843-63. [PMID: 19428496] [PMCID: PMC3925117] [DOI: 10.1016/j.neubiorev.2009.02.004]
Abstract
Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition.
Affiliation(s)
- Roxane J Itier: Psychology Department, University of Waterloo, Ontario, Canada

164. Timing, timing, timing: fast decoding of object information from intracranial field potentials in human visual cortex. Neuron 2009; 62:281-90. [PMID: 19409272] [DOI: 10.1016/j.neuron.2009.02.025]
Abstract
The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here, we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms poststimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feedforward theories and provides strong constraints for computational models of human vision.
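The time-resolved, single-trial decoding described above can be sketched as follows: slide a window along simulated field potentials for two object categories, extract one window-mean feature per trial, and ask from which latency a simple threshold classifier exceeds chance. The signals, trial counts, and the 100 ms divergence latency are synthetic; the study's actual features (intracranial field potentials from 912 electrodes) and classifiers are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_time = 100, 300   # 100 trials per category, 300 ms at 1 kHz (simulated)

def make_category(offset):
    """Noisy 'field potentials'; the category signal appears only after 100 ms."""
    x = rng.normal(0.0, 1.0, (n_trials, n_time))
    x[:, 100:] += offset
    return x

cat_a, cat_b = make_category(0.0), make_category(1.0)

win = 20  # ms per decoding window
accs = []
for start in range(0, n_time - win + 1, win):
    fa = cat_a[:, start:start + win].mean(1)  # one window-mean feature per trial
    fb = cat_b[:, start:start + win].mean(1)
    thr = 0.5 * (fa[:50].mean() + fb[:50].mean())  # "train" on first half of trials
    acc = 0.5 * ((fa[50:] < thr).mean() + (fb[50:] > thr).mean())  # test on the rest
    accs.append(acc)

onset_ms = win * next(i for i, a in enumerate(accs) if a > 0.75)
print(f"category decodable above chance from ~{onset_ms} ms after stimulus onset")
```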
165. Phased processing of facial emotion: an ERP study. Neurosci Res 2009; 64:30-40. [DOI: 10.1016/j.neures.2009.01.009]
166. Tanaka E, Inui K, Kida T, Kakigi R. Common cortical responses evoked by appearance, disappearance and change of the human face. BMC Neurosci 2009; 10:38. [PMID: 19389259] [PMCID: PMC2680404] [DOI: 10.1186/1471-2202-10-38]
Abstract
Background To segregate luminance-related, face-related and non-specific components involved in spatio-temporal dynamics of cortical activations to a face stimulus, we recorded cortical responses to face appearance (Onset), disappearance (Offset), and change (Change) using magnetoencephalography. Results Activity in and around the primary visual cortex (V1/V2) showed luminance-dependent behavior. Any of the three events evoked activity in the middle occipital gyrus (MOG) at 150 ms and temporo-parietal junction (TPJ) at 250 ms after the onset of each event. Onset and Change activated the fusiform gyrus (FG), while Offset did not. This FG activation showed a triphasic waveform, consistent with results of intracranial recordings in humans. Conclusion Analysis employed in this study successfully segregated four different elements involved in the spatio-temporal dynamics of cortical activations in response to a face stimulus. The results show the responses of MOG and TPJ to be associated with non-specific processes, such as the detection of abrupt changes or exogenous attention. Activity in FG corresponds to a face-specific response recorded by intracranial studies, and that in V1/V2 is related to a change in luminance.
Affiliation(s)
- Emi Tanaka: Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki, Japan

167. Turella L, Erb M, Grodd W, Castiello U. Visual features of an observed agent do not modulate human brain activity during action observation. Neuroimage 2009; 46:844-53. [PMID: 19285143] [DOI: 10.1016/j.neuroimage.2009.03.002]
Abstract
Recent neuroimaging evidence in macaques has shown that the neural system underlying the observation of hand actions performed by others (the "action observation system") is modulated by whether the observed action is performed by a person in full view or by an isolated hand (type-of-view manipulation). Although a human homologue of such a circuit has been identified, whether the neural processes involved in this capacity in humans are modulated by the type of view remains unknown. Here we used functional magnetic resonance imaging (fMRI) to investigate whether the "action observation system", with specific reference to the ventral premotor cortex, responds differentially depending on the type of view. We also tested this manipulation within regions of the human brain showing overlapping activity for both the observation and the execution of action ("mirror" regions). To this end, the same subjects were requested to observe grasping actions performed under the two types of view (observation conditions) or to perform a grasping action (execution condition). Results from whole-brain analyses indicate that overlapping activity for action observation and execution was evident in a broad network of areas including parietal, premotor and temporal cortices. Activity within this network was evident both for the observation of a person in full view and for an isolated hand, but it was not modulated by the type of view. Similarly, region-of-interest (ROI) analyses within the ventral premotor cortex confirmed that this area responded similarly to the observation of an isolated hand and of an entire model acting. These findings offer novel insights into what the "action observation" and "mirror" systems visually code and how the processing underlying such coding may vary across species.
Further, they support the hypothesis that the action goal is among the main determinants of action observation activity, and they point to the existence of a broad system involved in the simulation of action.
Affiliation(s)
- Luca Turella: Section on Experimental MR of the CNS, Department of Neuroradiology, University of Tuebingen, Germany

168. Pictorial cognitive task resolution and dynamics of event-related potentials. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub 2009; 152:223-30. [PMID: 19219211] [DOI: 10.5507/bp.2008.034]
Abstract
AIMS To judge whether and how the character of the visual stimulus and the type of cognitive task affect brain event-related potentials (ERPs). METHODS ERPs to three types of visual stimuli (a white blank oval on a dark background, an unfolded cube, and a net of sixteen small squares) were recorded from nine scalp sites and saved on a computer. Special software was used for off-line analysis of the ERPs. RESULTS The presentation of each of the three visual stimuli was followed by ERPs consisting of two negative (N160, N340) and one positive (P220) components. The character of the stimulus did not affect the latency of the ERP components; however, the type of visual stimulus affected the amplitude. The most conspicuous changes were shown by the N340 component. Its average amplitude, relative to the reference amplitude, was always significantly higher during the first cognitive task ("Choose the cube that can be folded up from the unfolded cube!") and significantly lower during the second cognitive task ("Complete the missing part of a figure with the appropriate item!"). Subjective personality traits such as nervousness, spontaneous aggressivity and emotional lability also influenced the recovery phase of the experiment, affecting the average amplitude of N340. CONCLUSION The results revealed that the cognitive processes underlying successful resolution of two pictorial cognitive tasks differentially affected the activity of the systems giving rise to visual ERPs.
169. Gunji A, Inagaki M, Inoue Y, Takeshima Y, Kaga M. Event-related potentials of self-face recognition in children with pervasive developmental disorders. Brain Dev 2009; 31:139-47. [PMID: 18590948] [DOI: 10.1016/j.braindev.2008.04.011]
Abstract
Patients with pervasive developmental disorders (PDD) often have difficulty reading facial expressions and deciphering their implied meaning. We focused on semantic encoding related to face cognition to investigate event-related potentials (ERPs) to the subject's own face and familiar faces in children with and without PDD. Eight children with PDD (seven boys and one girl; aged 10.8+/-2.9 years; one left-handed) and nine age-matched typically developing children (four boys and five girls; aged 11.3+/-2.3 years; one left-handed) participated in this study. The stimuli consisted of three face images (self, familiar, and unfamiliar faces), one scrambled face image, and one object image (e.g., cup) with gray scale. We confirmed three major components: N170 and early posterior negativity (EPN) in the occipito-temporal regions (T5 and T6) and P300 in the parietal region (Pz). An enhanced N170 was observed as a face-specific response in all subjects. However, semantic encoding of each face might be unrelated to N170 because the amplitude and latency were not significantly different among the face conditions. On the other hand, an additional component after N170, EPN which was calculated in each subtracted waveform (self vs. familiar and familiar vs. unfamiliar), indicated self-awareness and familiarity with respect to face cognition in the control adults and children. Furthermore, the P300 amplitude in the control adults was significantly greater in the self-face condition than in the familiar-face condition. However, no significant differences in the EPN and P300 components were observed among the self-, familiar-, and unfamiliar-face conditions in the PDD children. The results suggest a deficit of semantic encoding of faces in children with PDD, which may be implicated in their delay in social communication.
Affiliation(s)
- Atsuko Gunji: Department of Developmental Disorders, National Institute of Mental Health, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo 187-8553, Japan

170. Commonalities in the neural mechanisms underlying automatic attentional shifts by gaze, gestures, and symbols. Neuroimage 2009; 45:984-92. [PMID: 19167506] [DOI: 10.1016/j.neuroimage.2008.12.052]
Abstract
Eye gaze, hand-pointing gestures, and arrows automatically trigger attentional shifts. Although it has been suggested that common neural mechanisms underlie these three types of attentional shifts, this issue remains unsettled. We measured brain activity using fMRI while participants observed directional and non-directional stimuli, including eyes, hands, and arrows, to investigate this issue. Conjunction analyses revealed that the posterior superior temporal sulcus (STS), the inferior parietal lobule, the inferior frontal gyrus, and the occipital cortices in the right hemisphere were more active in common in response to directional versus non-directional stimuli. These results suggest commonalities in the neurocognitive mechanisms underlying the automatic attentional shifts triggered by gaze, gestures, and symbols.
171. Yang Y, Guo H, Tong S, Zhu Y, Qiu Y. Neurophysiology study of early visual processing of face and non-face recognition under simulated prosthetic vision. Annu Int Conf IEEE Eng Med Biol Soc 2009; 2009:3952-5. [PMID: 19964326] [DOI: 10.1109/iembs.2009.5333672]
Abstract
Behavioral research has shown that visual function can be partly restored by phosphene-based prosthetic vision in non-congenitally blind individuals. However, the early visual processing mechanisms of phosphene object recognition are still unclear. This paper investigated the electro-neurophysiology underlying phosphene face and non-face recognition. Modulations of the latency and amplitude of the N170 component of the event-related potential (ERP) were analyzed. Our preliminary results showed that (1) both normal and phosphene face stimuli elicited a prominent N170; nevertheless, phosphene stimuli caused a notable latency delay and amplitude suppression of the N170 compared with normal stimuli; and (2) phosphene non-face stimuli produced a slight but significant latency delay compared with normal stimuli, while amplitude suppression was not observed. We therefore suggest that (1) phosphene perception disrupts early visual processing of non-canonical images of objects, and more profoundly so for faces; (2) face-specific processing is preserved under prosthetic vision; and (3) holistic processing is the major stage in early visual processing of phosphene face recognition, while part-based processing is attenuated due to the loss of detail.
Affiliation(s)
- Yuan Yang: Department of Biomedical Engineering, Shanghai Jiao Tong University, China

172. Tsuchiya N, Kawasaki H, Oya H, Howard MA, Adolphs R. Decoding face information in time, frequency and space from direct intracranial recordings of the human brain. PLoS One 2008; 3:e3892. [PMID: 19065268] [PMCID: PMC2588533] [DOI: 10.1371/journal.pone.0003892]
Abstract
Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
Affiliation(s)
- Naotsugu Tsuchiya
- Division of Humanities and Social Sciences, Caltech, Pasadena, California, United States of America.
173
Abstract
Covert exchange of autonomic responses may shape social affective behavior, as observed in mirroring of pupillary responses during sadness processing. We examined how, independent of facial emotional expression, dynamic coherence between one's own and another's pupil size modulates regional brain activity. Fourteen subjects viewed pairs of eye stimuli while undergoing fMRI. Using continuous pupillometry biofeedback, the size of the observed pupils was varied, correlating positively or negatively with changes in participants’ own pupils. Viewing both static and dynamic stimuli activated right fusiform gyrus. Observing dynamically changing pupils activated STS and amygdala, regions engaged by non-static and salient facial features. Discordance between observed and observer's pupillary changes enhanced activity within bilateral anterior insula, left amygdala and anterior cingulate. In contrast, processing positively correlated pupils enhanced activity within left frontal operculum. Our findings suggest pupillary signals are monitored continuously during social interactions and that incongruent changes activate brain regions involved in tracking motivational salience and attentionally meaningful information. Naturalistically, dynamic coherence in pupillary change follows fluctuations in ambient light. Correspondingly, in social contexts discordant pupil response is likely to reflect divergence of dispositional state. Our data provide empirical evidence for an autonomically mediated extension of forward models of motor control into social interaction.
174
Bourne VJ, Vladeanu M, Hole GJ. Lateralised repetition priming for featurally and configurally manipulated familiar faces: evidence for differentially lateralised processing mechanisms. Laterality 2008; 14:287-99. [PMID: 18949655 DOI: 10.1080/13576500802383709] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Although early research suggested that the right hemisphere was dominant for processing faces, more recent studies have provided evidence for both hemispheres being involved, at least to some extent. In this experiment we examined hemispheric specialisations by using a lateralised repetition-priming paradigm with selectively degraded faces. Configurally degraded prime faces produced negative priming when presented to the left visual field (right hemisphere) and positive priming (facilitation) when presented to the right visual field (left hemisphere). Featurally degraded prime faces produced the opposite pattern of effects: positive priming when presented to the left visual field (right hemisphere) and negative priming when presented to the right visual field (left hemisphere). These results support the proposal that each hemisphere is differentially specialised for processing distinct forms of facial information: the right hemisphere for configural information and the left hemisphere for featural information.
175
Williams LM. Voxel-based morphometry in schizophrenia: implications for neurodevelopmental connectivity models, cognition and affect. Expert Rev Neurother 2008; 8:1049-65. [PMID: 18590476 DOI: 10.1586/14737175.8.7.1049] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Voxel-based morphometry (VBM) studies have provided valuable data on the nature and distribution of gray and white matter abnormalities in schizophrenia relative to the whole brain. Most VBM studies have focused on chronic patients, but there are accumulating studies of first-episode schizophrenia and other high-risk groups such as first-degree relatives. This review outlines the evidence from VBM studies of both chronic and first-episode/high-risk groups. The most consistent reduction revealed in chronic patients is in the superior temporal cortex, and in first-episode/high-risk individuals, in frontal brain regions. These findings are reviewed in relation to complementary evidence for neurodevelopmental deviation, and functional associations with both neuroimaging and behavioral measures of general and social cognition.
Affiliation(s)
- Leanne M Williams
- Brain Dynamics Centre, Westmead Millennium Institute & Western Clinical School, University of Sydney, Westmead Hospital, NSW 2145, Australia.
176
Ricciardelli P, Driver J. Effects of head orientation on gaze perception: how positive congruency effects can be reversed. Q J Exp Psychol (Hove) 2008; 61:491-504. [PMID: 17853198 DOI: 10.1080/17470210701255457] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Several past studies have considered how perceived head orientation may be combined with perceived gaze direction in judging where someone else is attending. In three experiments we tested the impact of different sources of information by examining the role of head orientation in gaze-direction judgements when presenting: (a) the whole face; (b) the face with the nose masked; (c) just the eye region, removing all other head-orientation cues apart from some visible part of the nose; or (d) just the eyes, with all parts of the nose masked and no head orientation cues present other than those within the eyes themselves. We also varied time pressure on gaze direction judgements. The results showed that gaze judgements were not solely driven by the eye region. Gaze perception can also be affected by parts of the head and face, but in a manner that depends on the time constraints for gaze direction judgements. While "positive" congruency effects were found with time pressure (i.e., faster left/right judgements of seen gaze when the seen head deviated towards the same side as that gaze), the opposite applied without time pressure.
177
ERP study of viewpoint-independence in familiar-face recognition. Int J Psychophysiol 2008; 69:119-26. [DOI: 10.1016/j.ijpsycho.2008.03.009] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2007] [Revised: 03/14/2008] [Accepted: 03/16/2008] [Indexed: 11/20/2022]
178
Haas BW, Constable RT, Canli T. Functional magnetic resonance imaging of temporally distinct responses to emotional facial expressions. Soc Neurosci 2008; 4:121-34. [PMID: 18633831 DOI: 10.1080/17470910802176326] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Understanding the temporal dynamics of brain function contributes to models of learning and memory as well as the processing of emotions and habituation. In this article, we present a novel analysis technique to investigate spatiotemporal patterns of activation in response to blocked presentations of emotional stimuli. We modeled three temporal response functions (TRFs), which were maximally sensitive to the onset, early or sustained temporal component of a given block type. This analysis technique was applied to a data set of 29 subjects who underwent functional magnetic resonance imaging while responding to fearful, happy, and sad facial expressions. We identified brain regions that uniquely fit each of the three TRFs for each emotional condition and compared the results to the standard approach, which was based on the canonical hemodynamic response function. We found that voxels within the precuneus fit the onset TRF but did not fit the early or the sustained TRF in all the emotional conditions. On the other hand, voxels within the amygdala fit the sustained TRF, but not the onset or early TRF, during presentation of fearful stimuli, suggesting a spatiotemporal dissociation between these structures. This technique provides researchers with an additional tool in order to investigate the temporal dynamics of neural circuits.
Affiliation(s)
- Brian W Haas
- Department of Psychiatry and Behavioral Sciences, Stanford University Medical School, 401 Quarry Road, Stanford, CA 94305, USA.
179
Riddoch MJ, Johnston RA, Bracewell RM, Boutsen L, Humphreys GW. Are faces special? A case of pure prosopagnosia. Cogn Neuropsychol 2008; 25:3-26. [PMID: 18340601 DOI: 10.1080/02643290801920113] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The ability to recognize individual faces is of crucial social importance for humans and evolutionarily necessary for survival. Consequently, faces may be "special" stimuli, for which we have developed unique modular perceptual and recognition processes. Some of the strongest evidence for face processing being modular comes from cases of prosopagnosia, where patients are unable to recognize faces whilst retaining the ability to recognize other objects. Here we present the case of an acquired prosopagnosic whose poor recognition was linked to a perceptual impairment in face processing. Despite this, she had intact object recognition, even at a subordinate level. She also showed a normal ability to learn and to generalize learning of nonfacial exemplars differing in the nature and arrangement of their parts, along with impaired learning and generalization of facial exemplars. The case provides evidence for modular perceptual processes for faces.
Affiliation(s)
- M Jane Riddoch
- Behavioural Brain Sciences, School of Psychology, University of Birmingham, Birmingham, UK.
180
Harris A, Aguirre GK. The Representation of Parts and Wholes in Face-selective Cortex. J Cogn Neurosci 2008; 20:863-78. [DOI: 10.1162/jocn.2008.20509] [Citation(s) in RCA: 65] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Although face perception is often characterized as depending on holistic, rather than part-based, processing, there is behavioral evidence for independent representations of face parts. Recent work has linked “face-selective” regions defined with functional magnetic resonance imaging (fMRI) to holistic processing, but the response of these areas to face parts remains unclear. Here we examine part-based versus holistic processing in “face-selective” visual areas using face stimuli manipulated in binocular disparity to appear either behind or in front of a set of stripes [Nakayama, K., Shimojo, S., & Silverman, G. H. Stereoscopic depth: Its relation to image segmentation, grouping, and the recognition of occluded objects. Perception, 18, 55–68, 1989]. While the first case will be “filled in” by the visual system and perceived holistically, we demonstrate behaviorally that the latter cannot be completed amodally, and thus is perceived as parts. Using these stimuli in fMRI, we found significant responses to both depth manipulations in inferior occipital gyrus and middle fusiform gyrus (MFG) “face-selective” regions, suggesting that neural populations in these areas encode both parts and wholes. In comparison, applying these depth manipulations to control stimuli (alphanumeric characters) elicited much smaller signal changes within face-selective regions, indicating that the part-based representation for faces is separate from that for objects. The combined adaptation data also showed an interaction of depth and familiarity within the right MFG, with greater adaptation in the back (holistic) condition relative to parts for familiar but not unfamiliar faces. Together, these data indicate that face-selective regions of occipito-temporal cortex engage in both part-based and holistic processing. The relative recruitment of such representations may be additionally influenced by external factors such as familiarity.
181
Kouider S, Eger E, Dolan R, Henson RN. Activity in face-responsive brain regions is modulated by invisible, attended faces: evidence from masked priming. Cereb Cortex 2008; 19:13-23. [PMID: 18400791 PMCID: PMC2638745 DOI: 10.1093/cercor/bhn048] [Citation(s) in RCA: 74] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
It is often assumed that neural activity in face-responsive regions of primate cortex correlates with conscious perception of faces. However, whether such activity occurs without awareness is still debated. Using functional magnetic resonance imaging (fMRI) in conjunction with a novel masked face priming paradigm, we observed neural modulations that could not be attributed to perceptual awareness. More specifically, we found reduced activity in several classic face-processing regions, including the "fusiform face area," "occipital face area," and superior temporal sulcus, when a face was preceded by a briefly flashed image of the same face, relative to a different face, even when 2 images of the same face differed. Importantly, unlike most previous studies, which have minimized awareness by using conditions of inattention, the present results occurred when the stimuli (the primes) were attended. By contrast, when primes were perceived consciously, in a long-lag priming paradigm, we found repetition-related activity increases in additional frontal and parietal regions. These data not only demonstrate that fMRI activity in face-responsive regions can be modulated independently of perceptual awareness, but also document where such subliminal face-processing occurs (i.e., restricted to face-responsive regions of occipital and temporal cortex) and to what extent (i.e., independent of the specific image).
Affiliation(s)
- Sid Kouider
- Laboratoire des Sciences Cognitives et Psycholinguistique, CNRS/EHESS/DEC-ENS, 75005 Paris, France.
182
Morris JP, Green SR, Marion B, McCarthy G. Guided saccades modulate face- and body-sensitive activation in the occipitotemporal cortex during social perception. Brain Cogn 2008; 67:254-63. [PMID: 18346831 DOI: 10.1016/j.bandc.2008.01.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2007] [Accepted: 01/28/2008] [Indexed: 11/20/2022]
Abstract
Functional magnetic resonance imaging (fMRI) has identified distinct brain regions in ventral occipitotemporal cortex (VOTC) and lateral occipitotemporal cortex (LOTC) that are differentially activated by pictures of faces and bodies. Recent work from our laboratory has shown that the strong LOTC activation evoked by bodies in which the face is occluded is attenuated when the occlusion is removed. We hypothesized that this attenuation may occur because subjects preferentially fixate upon faces when present in the scene. Here, we experimentally manipulated subjects' fixations while they viewed a static picture of a character whose face, hand, and torso were continuously visible throughout each run. The subject's saccades and fixations were guided by a small fixation cross that made discrete jumps to a new location every 500ms. Subjects were instructed to follow the fixation cross and make a button press whenever it changed size. In a series of blocks, the fixation cross shifted from locations on the face, on the hand, and to locations on a background image of a phase-scrambled face. In a second study, the fixation cross moved similarly, but the hand locations were changed to locations along the character's body or torso. A localizer task was used to identify face- and body-sensitive regions of LOTC. Body-sensitive regions were strongly activated when the subjects' saccades were guided over the character's torso relative to when the saccades were guided over the character's face. Little to no activity occurred in the body-sensitive region of LOTC when the subjects' saccades were guided over the character's hand. The localizer task was unable to differentiate body-sensitive regions in lateral VOTC from face-sensitive regions, or body-sensitive regions in medial VOTC from flower-sensitive regions. Guided saccades over the body strongly activated both lateral and medial VOTC. These results provide new insights into the function of body-sensitive visual areas in both LOTC and VOTC, and illustrate the potential confounding influence of uncontrolled eye movements for neuroimaging studies of social perception.
Affiliation(s)
- James P Morris
- Duke-UNC Brain Imaging and Analysis Center, Duke University, Durham, NC, USA
183
Holmes A, Nielsen MK, Green S. Effects of anxiety on the processing of fearful and happy faces: An event-related potential study. Biol Psychol 2008; 77:159-73. [DOI: 10.1016/j.biopsycho.2007.10.003] [Citation(s) in RCA: 125] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2007] [Revised: 10/04/2007] [Accepted: 10/05/2007] [Indexed: 11/30/2022]
184
Itier RJ, Alain C, Sedore K, McIntosh AR. Early face processing specificity: it's in the eyes! J Cogn Neurosci 2008; 19:1815-26. [PMID: 17958484 DOI: 10.1162/jocn.2007.19.11.1815] [Citation(s) in RCA: 184] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.
Affiliation(s)
- Roxane J Itier
- Rotman Research Institute, Baycrest Centre, Toronto, Canada.
185
The face network: overextended? (Comment on: "Let's face it: It's a cortical network" by Alumit Ishai). Neuroimage 2007; 40:420-422. [PMID: 18243737 DOI: 10.1016/j.neuroimage.2007.11.061] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2007] [Revised: 10/29/2007] [Accepted: 11/01/2007] [Indexed: 11/21/2022] Open
Abstract
We offer a critique of Ishai's [Ishai, A., 2008. Let's face it: it's a cortical network. NeuroImage. doi:10.1016/j.neuroimage.2007.10.040] comment on the value of considering the brain areas that support face perception as a network. We emphasise that this idea is not in opposition to the notion that the fusiform gyrus plays a key role in the visual analysis of faces. More important, we argue that the definition offered of the "extended" face network--areas showing a greater fMRI response to intact than scrambled face images--is too inclusive, and present data to indicate that at least two of the proposed "nodes" of this network also respond to non-face objects (compared to scrambled controls). Finally, we consider briefly how converging methodological approaches may augment the use of fMRI alone in understanding how anatomically widespread brain areas coordinate their activity in order to make sense of the human face.
186
Kriegeskorte N, Formisano E, Sorger B, Goebel R. Individual faces elicit distinct response patterns in human anterior temporal cortex. Proc Natl Acad Sci U S A 2007; 104:20600-5. [PMID: 18077383 PMCID: PMC2154477 DOI: 10.1073/pnas.0705654104] [Citation(s) in RCA: 393] [Impact Index Per Article: 23.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2007] [Indexed: 11/18/2022] Open
Abstract
Visual face identification requires distinguishing between thousands of faces we know. This computational feat involves a network of brain regions including the fusiform face area (FFA) and anterior inferotemporal cortex (aIT), whose roles in the process are not well understood. Here, we provide the first demonstration that it is possible to discriminate cortical response patterns elicited by individual face images with high-resolution functional magnetic resonance imaging (fMRI). Response patterns elicited by the face images were distinct in aIT but not in the FFA. Individual-level face information is likely to be present in both regions, but our data suggest that it is more pronounced in aIT. One interpretation is that the FFA detects faces and engages aIT for identification.
Affiliation(s)
- Nikolaus Kriegeskorte
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20892, USA.
187
Rotshtein P, Geng JJ, Driver J, Dolan RJ. Role of features and second-order spatial relations in face discrimination, face recognition, and individual face skills: behavioral and functional magnetic resonance imaging data. J Cogn Neurosci 2007; 19:1435-52. [PMID: 17714006 PMCID: PMC2600425 DOI: 10.1162/jocn.2007.19.9.1435] [Citation(s) in RCA: 85] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We compared the contribution of featural information and second-order spatial relations (spacing between features) in face processing. A fully factorial design presented the same or different "features" (eyes, mouth, and nose) across two successive displays while, orthogonally, the second-order spatial relations between those features were the same or different. The range of such changes matched the possibilities within the population of natural face images. Behaviorally, we found that judging whether two successive faces depicted the same person was dominated by features, although second-order spatial relations also contributed. This influence of spatial relations correlated, for individual subjects, with their skill at recognition of faces (as famous, or as previously exposed) in separate behavioral tests. Using the same repetition design in functional magnetic resonance imaging, we found feature-dependent effects in the lateral occipital and right fusiform regions. In addition, there were spatial relation effects in the bilateral inferior occipital gyrus and right fusiform that correlated with individual differences in (separately measured) behavioral sensitivity to those changes. The results suggest that featural and second-order spatial relation aspects of faces make distinct contributions to behavioral discrimination and recognition, with features contributing most to face discrimination and second-order spatial relational aspects correlating best with recognition skills. Distinct neural responses to these aspects were found with functional magnetic resonance imaging, particularly when individual skills were taken into account for the impact of second-order spatial relations.
188
Walker PM, Silvert L, Hewstone M, Nobre AC. Social contact and other-race face processing in the human brain. Soc Cogn Affect Neurosci 2007; 3:16-25. [PMID: 19015091 DOI: 10.1093/scan/nsm035] [Citation(s) in RCA: 104] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The present study investigated the influence of social factors upon the neural processing of faces of other races using event-related potentials. A multi-tiered approach was used to identify face-specific stages of processing, to test for effects of race-of-face upon processing at these stages and to evaluate the impact of social contact and individuating experience upon these effects. The results showed that race-of-face has significant effects upon face processing, starting from early perceptual stages of structural encoding, and that social factors may play an important role in mediating these effects.
Affiliation(s)
- Pamela M Walker
- Department of Experimental Psychology, University of Oxford, South Parks Road, OX1 3UD UK.
189
Philiastides MG, Sajda P. EEG-informed fMRI reveals spatiotemporal characteristics of perceptual decision making. J Neurosci 2007; 27:13082-91. [PMID: 18045902 PMCID: PMC6673396 DOI: 10.1523/jneurosci.3540-07.2007] [Citation(s) in RCA: 144] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2007] [Revised: 09/25/2007] [Accepted: 10/05/2007] [Indexed: 11/21/2022] Open
Abstract
Single-unit and multiunit recordings in primates have already established that decision making involves at least two general stages of neural processing: representation of evidence from early sensory areas and accumulation of evidence to a decision threshold from decision-related regions. However, the relay of information from early sensory to decision areas, such that the accumulation process is instigated, is not well understood. Using a cued paradigm and single-trial analysis of electroencephalography (EEG), we previously reported on temporally specific components related to perceptual decision making. Here, we use information derived from our previous EEG recordings to inform the analysis of fMRI data collected for the same behavioral task to ascertain the cortical origins of each of these EEG components. We demonstrate that a cascade of events associated with perceptual decision making takes place in a highly distributed neural network. Of particular importance is an activation in the lateral occipital complex implicating perceptual persistence as a mechanism by which object decision making in the human brain is instigated.
Affiliation(s)
- Marios G. Philiastides
- Laboratory for Intelligent Imaging and Neural Computing, Department of Biomedical Engineering, Columbia University, New York, New York 10027
- Paul Sajda
- Laboratory for Intelligent Imaging and Neural Computing, Department of Biomedical Engineering, Columbia University, New York, New York 10027
190
Ishai A. Let's face it: it's a cortical network. Neuroimage 2007; 40:415-419. [PMID: 18063389 DOI: 10.1016/j.neuroimage.2007.10.040] [Citation(s) in RCA: 266] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2007] [Revised: 08/08/2007] [Accepted: 10/27/2007] [Indexed: 01/12/2023] Open
Abstract
Face perception elicits activation within a distributed cortical network in the human brain. The network includes visual ("core") regions, which process invariant facial features, as well as limbic and prefrontal ("extended") regions that process changeable aspects of faces. Analysis of effective connectivity reveals that the major entry node in the "face network" is the lateral fusiform gyrus and that the functional coupling between the core and the extended systems is content-dependent. A model for face perception is proposed, in which the flow of information through the network is shaped by cognitive demands.
Affiliation(s)
- Alumit Ishai
- Institute of Neuroradiology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
191
Morris JP, McCarthy G. Guided saccades modulate object and face-specific activity in the fusiform gyrus. Hum Brain Mapp 2007; 28:691-702. [PMID: 17133398 PMCID: PMC6871438 DOI: 10.1002/hbm.20301] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
We investigated the influence of saccadic eye movements on the magnitude of functional MRI (fMRI) activation in brain regions known to participate in object and face perception. In separate runs, subjects viewed a static image of a uniform gray field, a face, or a flower. Every 500 ms a small fixation cross made a discrete jump within the image and subjects were required to make a saccade and fixate the cross at its new location. Each run consisted of alternating blocks in which the subject was guided to make small and large saccades. A comparison of large vs. small saccade blocks revealed robust activity in the oculomotor system, particularly within the frontal eye fields (FEF), intraparietal sulcus (IPS), and superior colliculi regardless of the background image. Activity within portions of the ventral occipitotemporal cortex (VOTC) including the lingual and fusiform gyri was also modulated by saccades, but here saccade-related activity was strongly influenced by the background image. Activity within the VOTC was strongest when large saccadic eye movements were made over an image of a face or a flower compared to a uniform gray image. Of most interest was activity in the functionally predefined face-specific region of the fusiform gyrus, where large saccades made over a face increased activity, but where similar large saccades made over a flower or a uniform gray field did not increase activity. These results demonstrate the potentially confounding influence of uncontrolled eye movements for neuroimaging studies of face and object perception.
Affiliation(s)
- James P. Morris
- Duke‐UNC Brain Imaging and Analysis Center, Duke University, Durham, North Carolina
- Gregory McCarthy
- Duke‐UNC Brain Imaging and Analysis Center, Duke University, Durham, North Carolina
- Department of Veterans Affairs Medical Center, Durham, North Carolina
|
192
|
Adolphs R. Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav Cogn Neurosci Rev 2002; 1:21-62. [PMID: 17715585] [DOI: 10.1177/1534582302001001003] [Citation(s) in RCA: 746] [Impact Index Per Article: 43.9]
Abstract
Recognizing emotion from facial expressions draws on diverse psychological processes implemented in a large array of neural structures. Studies using evoked potentials, lesions, and functional imaging have begun to elucidate some of the mechanisms. Early perceptual processing of faces draws on cortices in occipital and temporal lobes that construct detailed representations from the configuration of facial features. Subsequent recognition requires a set of structures, including amygdala and orbitofrontal cortex, that links perceptual representations of the face to the generation of knowledge about the emotion signaled, a complex set of mechanisms using multiple strategies. Although recent studies have provided a wealth of detail regarding these mechanisms in the adult human brain, investigations are also being extended to nonhuman primates, to infants, and to patients with psychiatric disorders.
|
193
|
Abstract
The face inversion effect (FIE) is defined as the larger decrease in recognition performance for faces than for other mono-oriented objects when they are presented upside down. Behavioral studies suggest the FIE takes place at the perceptual encoding stage and is mainly due to the decrease in ability to extract relational information when discriminating individual faces. Recently, functional magnetic resonance imaging and scalp event-related potentials studies found that turning faces upside down slightly but significantly decreases the response of face-selective brain regions, including the so-called fusiform face area (FFA), and increases activity of other areas selective for nonface objects. Face inversion leads to a significantly delayed (sometimes larger) N170 component, an occipito-temporal scalp potential associated with the perceptual encoding of faces and objects. These modulations are in agreement with the perceptual locus of the FIE and reinforce the view that the FFA and N170 are sensitive to individual face discrimination.
|
194
|
Hirai M, Hiraki K. Differential neural responses to humans vs. robots: an event-related potential study. Brain Res 2007; 1165:105-15. [PMID: 17658496] [DOI: 10.1016/j.brainres.2007.05.078] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Received: 09/13/2006] [Revised: 05/09/2007] [Accepted: 05/09/2007]
Abstract
Do we perceive humanoid robots as human beings? Recent neuroimaging studies have reported similarity in the neural processing of human and robot actions in the superior temporal sulcus area but a differential neural response in the premotor area. These studies suggest that neural activity in the occipitotemporal region is not affected by appearance information. In contrast, using the inversion effect as an index, we demonstrate for the first time that the appearance of a presented action affects neural responses in the occipitotemporal region. In event-related potential (ERP) studies, the inversion effect is the phenomenon whereby an upright face- and body-sensitive ERP component in the occipitotemporal region is enhanced and delayed, up to 200 ms, in response to an inverted face or body, but not to an inverted object. We used three kinds of walking animation differing in appearance (human, robot, and point-light), as well as inverted versions of each. The anatomical structure and walking speed of the presented stimuli were identical. The results showed that the inversion effect occurred in the right occipitotemporal region only in response to the human appearance, and not to the robotic or point-light appearances. That is, only for the human appearance was the amplitude in the inverted condition significantly larger than in the upright condition. Our results, which are contrary to other recent neuroimaging studies, suggest that appearance information affects the neural response in the occipitotemporal region.
Affiliation(s)
- Masahiro Hirai
- Course of General Systems Studies, Department of Multi-disciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-4-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan.
|
195
|
Barbeau EJ, Taylor MJ, Regis J, Marquis P, Chauvel P, Liégeois-Chauvel C. Spatio-temporal dynamics of face recognition. Cereb Cortex 2007; 18:997-1009. [PMID: 17716990] [DOI: 10.1093/cercor/bhm140] [Citation(s) in RCA: 125] [Impact Index Per Article: 7.4]
Abstract
To better understand face recognition, it is necessary to identify not only which brain structures are implicated but also the dynamics of the neuronal activity in these structures. Latencies can then be compared to unravel the temporal dynamics of information processing at the distributed network level. To achieve high spatial and temporal resolution, we used intracerebral recordings in epileptic subjects while they performed a famous/unfamiliar face recognition task. The first components peaked at 110 ms in the fusiform gyrus (FG) and simultaneously in the inferior frontal gyrus, suggesting the early establishment of a large-scale network. This was followed by components peaking at 160 ms in 2 areas along the FG. Important stages of distributed parallel processes ensued at 240 and 360 ms involving up to 6 regions along the ventral visual pathway. The final components peaked at 480 ms in the hippocampus. These stages largely overlapped. Importantly, event-related potentials to famous faces differed from unfamiliar faces and control stimuli in all medial temporal lobe structures. The network was bilateral but more right sided. Thus, recognition of famous faces takes place through the establishment of a complex set of local and distributed processes that interact dynamically and may be an emergent property of these interactions.
Affiliation(s)
- Emmanuel J Barbeau
- Centre de recherche Cerveau et Cognition, Université Paul Sabatier Toulouse 3, Centre National de Recherche Scientifique, Toulouse, France.
|
196
|
Kreiman G. Single unit approaches to human vision and memory. Curr Opin Neurobiol 2007; 17:471-5. [PMID: 17703936] [DOI: 10.1016/j.conb.2007.07.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Received: 04/05/2007] [Accepted: 07/12/2007]
Abstract
Research on the visual system largely relies on electrophysiology, pharmacology, and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow the study of humans but offer much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required for particular pathological conditions, including epilepsy and Parkinson's disease. We review our knowledge about the visual system and visual memories in the human brain at the single-neuron level. The properties of the human brain appear broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
Affiliation(s)
- Gabriel Kreiman
- Department of Ophthalmology and Division of Neuroscience, Children's Hospital Boston, Harvard Medical School, Center for Brain Science, Harvard University, 1 Blackfan Circle, Karp 11, Boston, MA 02115, USA.
|
197
|
Abstract
BACKGROUND: During speech perception, the ability to integrate auditory and visual information causes speech to sound louder and be more intelligible, and leads to quicker processing. This integration is important in early language development, and also continues to affect speech comprehension throughout the lifespan. Previous research shows that individuals with autism have difficulty integrating information, especially across multiple sensory domains. METHODS: In the present study, audiovisual speech integration was investigated in 18 adolescents with high-functioning autism and 19 well-matched adolescents with typical development using a speech-in-noise paradigm. Speech reception thresholds were calculated for auditory-only and audiovisual matched speech, and lipreading ability was measured. RESULTS: Compared to individuals with typical development, individuals with autism showed less benefit from the addition of visual information in audiovisual speech perception. We also found that individuals with autism were significantly worse than those in the comparison group at lipreading. Hierarchical regression demonstrated that group differences in the audiovisual condition, while influenced by auditory perception and especially by lipreading, were also attributable to a unique factor, which may reflect a specific deficit in audiovisual integration. CONCLUSIONS: Combined deficits in audiovisual speech integration and lipreading in individuals with autism are likely to contribute to ongoing difficulties in speech comprehension, and may also be related to delays in early language development.
Affiliation(s)
- Elizabeth G Smith
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY 14627, USA
|
198
|
Abstract
The human body, like the human face, is a rich source of socially relevant information about other individuals. Evidence from studies of both humans and non-human primates points to focal regions of the higher-level visual cortex that are specialized for the visual perception of the body. These body-selective regions, which can be dissociated from regions involved in face perception, have been implicated in the perception of the self and the 'body schema', the perception of others' emotions and the understanding of actions.
Affiliation(s)
- Marius V Peelen
- Centre for Cognitive Neuroscience, School of Psychology, Brigantia Building, University of Wales, Bangor, Gwynedd, LL57 2AS, UK
|
199
|
Carrick OK, Thompson JC, Epling JA, Puce A. It's all in the eyes: neural responses to socially significant gaze shifts. Neuroreport 2007; 18:763-6. [PMID: 17471062] [PMCID: PMC2794043] [DOI: 10.1097/wnr.0b013e3280ebb44b] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.6]
Abstract
Gaze direction signals another's focus of social attention. Here, we recorded event-related potentials to a multiface display in which gaze aversions created three different social scenarios: social attention, mutual gaze exchange, and gaze avoidance. The N170 was unaffected by social scenario. P350 latency was shortest for social attention and mutual gaze exchange, whereas P500 was largest for gaze avoidance. Our data suggest that neural activity after 300 ms post-stimulus may index processes associated with extracting social meaning, whereas activity earlier than 300 ms may index processing of the gaze change independent of social context.
Affiliation(s)
- Olivia K Carrick
- Center for Advanced Imaging, West Virginia University School of Medicine, Morgantown WV, USA
- James C Thompson
- Center for Advanced Imaging, West Virginia University School of Medicine, Morgantown WV, USA
- Department of Radiology, West Virginia University School of Medicine, Morgantown WV, USA
- James A Epling
- Center for Advanced Imaging, West Virginia University School of Medicine, Morgantown WV, USA
- Aina Puce
- Center for Advanced Imaging, West Virginia University School of Medicine, Morgantown WV, USA
- Department of Radiology, West Virginia University School of Medicine, Morgantown WV, USA
- Department of Neurobiology & Anatomy, West Virginia University School of Medicine, Morgantown WV, USA
|
200
|
Honda Y, Watanabe S, Nakamura M, Miki K, Kakigi R. Interhemispheric difference for upright and inverted face perception in humans: an event-related potential study. Brain Topogr 2007; 20:31-9. [PMID: 17638065] [DOI: 10.1007/s10548-007-0028-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Accepted: 06/29/2007]
Abstract
We recorded event-related potentials (ERPs) to investigate in detail the interhemispheric difference of the N170 component for upright and inverted face perception in fifteen healthy subjects. This is the first ERP study to focus on interhemispheric differences in face perception by presenting faces in the hemifields. The face inversion effect, i.e., prolonged latency and enhanced amplitude of the N170, was found in both hemispheres. The peak latency of the N170 following both upright and inverted face stimulation showed no significant difference between hemispheres, although the N170 latency for the inverted face was shorter in the left hemisphere than in the right. The N170 recorded from the hemisphere ipsilateral to the stimulated hemifield showed a unique pattern: the interhemispheric time difference of the N170 between the right and left hemispheres when the inverted face was presented in the left hemifield was significantly shorter than in the other three conditions. This may indicate that interhemispheric conduction from right to left is faster for inverted face perception than in the other conditions, or that the left hemisphere processes the inverted face very rapidly after receiving signals from the right hemisphere. If the N170 is generated by at least two temporally overlapping activities, differences in how these activities summate may account for the unique findings of this study. In conclusion, by presenting face stimuli in the hemifields, we identified several new findings regarding the N170 component and the face inversion effect.
Affiliation(s)
- Yukiko Honda
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki 444-8585, Japan
|