51. Balas BJ, Schmidt J, Saville A. A face detection bias for horizontal orientations develops in middle childhood. Front Psychol 2015; 6:772. [PMID: 26106349] [PMCID: PMC4459095] [DOI: 10.3389/fpsyg.2015.00772]
Abstract
Faces are complex stimuli that can be described via intuitive facial features like the eyes, nose, and mouth, "configural" features like the distances between facial landmarks, and features that correspond to computations performed in the early visual system (e.g., oriented edges). With regard to this latter category of descriptors, adult face recognition relies disproportionately on information in specific spatial frequency and orientation bands: many recognition tasks are performed more accurately when adults have access to mid-range spatial frequencies (8-16 cycles/face) and horizontal orientations (Dakin and Watt, 2009). In the current study, we examined how this information bias develops in middle childhood. We recruited children between 5 and 10 years of age to participate in a simple categorization task that required them to label images according to whether they depicted a face or a house. Critically, children were presented with face and house images composed of either primarily horizontal orientation energy, primarily vertical orientation energy, or both horizontal and vertical orientation energy. We predicted that any bias favoring horizontal information over vertical should be more evident in faces than in houses, and also that older children would be more likely to show such a bias than younger children. We designed our categorization task to be sufficiently easy that children would perform at near-ceiling accuracy levels, but with variation in response times that would reflect how they rely on different orientations as a function of age and object category. We found that horizontal bias for face detection (but not house detection) correlated significantly with age, suggesting an emergent category-specific bias for horizontal orientation energy that develops during middle childhood. These results thus suggest that the tuning of high-level recognition to specific low-level visual features takes place over several years of visual development.
Affiliation(s)
- Benjamin J Balas
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Jamie Schmidt
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
- Alyson Saville
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
52. Morin K, Guy J, Habak C, Wilson HR, Pagani L, Mottron L, Bertone A. Atypical Face Perception in Autism: A Point of View? Autism Res 2015; 8:497-506. [DOI: 10.1002/aur.1464]
Affiliation(s)
- Karine Morin
- Perceptual Neuroscience Lab (PNLab) for Autism and Development, Montréal, Canada
- École de Psychoéducation, Université de Montréal, Montréal, Canada
- Jacalyn Guy
- Perceptual Neuroscience Lab (PNLab) for Autism and Development, Montréal, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Canada
- Claudine Habak
- Visual Perception and Psychophysics Lab, Université de Montréal
- Hugh R. Wilson
- Center for Vision Research, York University, Toronto, Canada
- Linda Pagani
- École de Psychoéducation, Université de Montréal, Montréal, Canada
- Laurent Mottron
- University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), Montréal, Canada
- Armando Bertone
- Perceptual Neuroscience Lab (PNLab) for Autism and Development, Montréal, Canada
- University of Montreal Center of Excellence for Pervasive Developmental Disorders (CETEDUM), Montréal, Canada
- Department of Education and Counselling Psychology, McGill University, Montréal, Canada
53.

Abstract
Face recognition depends critically on horizontal orientations (Goffaux & Dakin, Frontiers in Psychology, 1(143), 1-14, 2010): Face images that lack horizontal features are harder to recognize than those that have this information preserved. We asked whether facial emotional recognition also exhibits this dependency by asking observers to categorize orientation-filtered happy and sad expressions. Furthermore, we aimed to dissociate image-based orientation energy from object-based orientation by rotating images 90 deg in the picture plane. In our first experiment, we showed that the perception of emotional expression does depend on horizontal orientations, and that object-based orientation constrained performance more than image-based orientation did. In Experiment 2, we showed that mouth openness (i.e., open vs. closed mouths) also influenced the emotion-dependent reliance on horizontal information. Finally, we describe a simple computational analysis that demonstrates that the impact of mouth openness was not predicted by variation in the distribution of orientation energy across horizontal and vertical orientation bands. Overall, our results suggest that emotion recognition largely does depend on horizontal information defined relative to the face, but that this bias is modulated by multiple factors that introduce variation in appearance across and within distinct emotions.
54.

Abstract
Acuity is the most commonly used measure of visual function, and reductions in acuity are associated with most eye diseases. Metamorphopsia, a perceived distortion of visual space, is another common symptom of visual impairment and is currently assessed qualitatively using Amsler (1953) charts. In order to quantify the impact of metamorphopsia on acuity, we measured the effect of physical spatial distortion on letter recognition. Following earlier work showing that letter recognition is tuned to specific spatial frequency (SF) channels, we hypothesized that the effect of distortion might depend on the spatial scale of visual distortion just as it depends on the spatial scale of masking noise. Six normally sighted observers completed a 26-alternative forced-choice (AFC) Sloan letter identification task at five different viewing distances, and the letters underwent different levels of spatial distortion. Distortion was controlled using spatially band-pass filtered noise that spatially remapped pixel locations. Noise was varied over five spatial frequencies and five magnitudes. Performance was modeled with logistic regression and worsened linearly with increasing distortion magnitude and decreasing letter size. We found that the effect of distortion peaks at midrange retinal SFs and can be explained by the tuning of a basic contrast sensitivity function, while in object-centered coordinates distortion sensitivity follows a pattern similar to that of letter recognition and is tuned to approximately three cycles per letter (CPL). The interaction between letter size and distortion makes acuity an unreliable outcome measure for metamorphopsia assessment.
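The distortion procedure described above, remapping pixel locations with spatially band-pass filtered noise, can be sketched as follows. This is a minimal illustration, not the authors' code: the Gaussian annulus filter, the parameter names, and the crude nearest-neighbor remapping are all assumptions.

```python
import numpy as np

def bandpass_noise(shape, center_freq, bandwidth, rng):
    # White noise filtered in the Fourier domain with a Gaussian
    # annulus centered at center_freq (cycles/image).
    noise = rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None] * shape[0]
    fx = np.fft.fftfreq(shape[1])[None, :] * shape[1]
    radius = np.hypot(fy, fx)
    band = np.exp(-((radius - center_freq) ** 2) / (2 * bandwidth ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * band))

def distort(image, magnitude, center_freq, bandwidth, seed=0):
    # Remap each pixel by a band-pass-filtered displacement field
    # whose peak displacement equals `magnitude` pixels.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dy = bandpass_noise((h, w), center_freq, bandwidth, rng)
    dx = bandpass_noise((h, w), center_freq, bandwidth, rng)
    dy *= magnitude / max(np.abs(dy).max(), 1e-12)
    dx *= magnitude / max(np.abs(dx).max(), 1e-12)
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + dx).astype(int), 0, w - 1)
    return image[src_y, src_x]

letter = np.zeros((32, 32))
letter[8:24, 14:18] = 1.0  # a crude letter "I"
warped = distort(letter, magnitude=3, center_freq=4, bandwidth=1)
print(warped.shape)  # (32, 32)
```

Varying `center_freq` and `magnitude` corresponds to the five distortion frequencies and five magnitudes in the experiment; with `magnitude=0` the image is returned unchanged.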
Affiliation(s)
- Emily Wiecek
- Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA; Institute of Ophthalmology, University College London, London, UK
- Steven C Dakin
- Institute of Ophthalmology, University College London, London, UK; Biomedical Research Centre, Moorfields Eye Hospital, National Institute for Health Research, London, UK; Department of Optometry and Vision Science, University of Auckland, New Zealand
- Peter Bex
- Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA; Department of Psychology, Northeastern University, Boston, MA, USA
55. Collin CA, Rainville S, Watier N, Boutet I. Configural and featural discriminations use the same spatial frequencies: a model observer versus human observer analysis. Perception 2014; 43:509-26. [PMID: 25154285] [DOI: 10.1068/p7531]
Abstract
Previous work has shown mixed results regarding the role of different spatial frequency (SF) ranges in featural and configural processing of faces. Some studies suggest no special role of any given band for either type of processing, while others suggest that low SFs principally support configural analysis. Here we attempt to put this issue on a more rigorous footing by comparing human performance when making featural and configural discriminations with that of a model observer algorithm carrying out the same task. The model uses a simple algorithm that calculates the dot product of a stimulus image with each available potential match image to find the maximally likely match. It thus provides a principled way of analyzing available image information. We find human accuracy peaks at around 10 cycles per face (cpf) regardless of whether featural or configural manipulations are being detected. We also find accuracy peaks in the same part of the spectrum regardless of which feature is manipulated (ie eyes, nose, or mouth). Conversely, model observer performance, measured in terms of white noise tolerance, peaks at approximately 5 cpf, and this value again remains roughly constant regardless of the type of manipulation and feature manipulated. The ratio of the model's noise tolerance to a derived equivalent noise tolerance value for humans peaks at around 10 cpf, similar to the accuracy data. These results provide evidence that the human performance maxima at 10 cpf are not due simply to the physical characteristics of face stimuli, but rather arise due to an interaction between the available information in face images and human perceptual processing.
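The matching rule described above, taking the dot product of the stimulus with each candidate match image and picking the maximum, reduces to a simple template matcher. A minimal sketch (illustrative only; the function name and toy stimuli are ours, and the published model's noise-tolerance measurement is omitted):

```python
import numpy as np

def model_observer(stimulus, templates):
    # Score each candidate match image by its dot product with the
    # stimulus and return the index of the maximally likely match.
    scores = [float(np.dot(stimulus.ravel(), t.ravel())) for t in templates]
    return int(np.argmax(scores))

# Toy demo: two orthogonal 4x4 "match images".
face_a = np.eye(4)             # energy on the main diagonal
face_b = np.fliplr(np.eye(4))  # energy on the anti-diagonal
stimulus = face_a + 0.2        # face_a plus a uniform pedestal standing in for noise
print(model_observer(stimulus, [face_a, face_b]))  # prints 0 (face_a wins)
```

In the study, such an observer is run on SF-filtered faces with added white noise, and the noise level at which it still matches correctly gives the model's tolerance at each band.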
56. Bonaccorsi J, Berardi N, Sale A. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning. Front Neural Circuits 2014; 8:82. [PMID: 25076874] [PMCID: PMC4100600] [DOI: 10.3389/fncir.2014.00082]
Abstract
Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, noninvasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.
Affiliation(s)
- Joyce Bonaccorsi
- Department of Medicine, Institute of Neuroscience CNR, National Research Council (CNR), Pisa, Italy
- Nicoletta Berardi
- Department of Medicine, Institute of Neuroscience CNR, National Research Council (CNR), Pisa, Italy; Department of Psychology, Florence University, Florence, Italy
- Alessandro Sale
- Department of Medicine, Institute of Neuroscience CNR, National Research Council (CNR), Pisa, Italy
57. Taylor CP, Bennett PJ, Sekuler AB. Evidence for adjustable bandwidth orientation channels. Front Psychol 2014; 5:578. [PMID: 24971069] [PMCID: PMC4054014] [DOI: 10.3389/fpsyg.2014.00578]
Abstract
The standard model of early vision claims that orientation and spatial frequency are encoded by multiple, quasi-independent channels that have fixed spatial frequency and orientation bandwidths. The standard model was developed using detection and discrimination data collected from experiments that used deterministic patterns, such as Gabor patches and gratings, as stimuli. However, detection data from experiments using noise as a stimulus suggest that the visual system may use adjustable-bandwidth, rather than fixed-bandwidth, channels. In our previous work, we used classification images as a key piece of evidence against the hypothesis that pattern detection is based on the responses of channels with an adjustable spatial frequency bandwidth. Here we tested the hypothesis that channels with adjustable orientation bandwidths are used to detect two-dimensional, filtered noise targets that varied in orientation bandwidth and were presented in white noise. Consistent with our previous work on spatial frequency bandwidth, we found that detection thresholds were consistent with the hypothesis that observers sum information across a broad range of orientations nearly optimally: absolute efficiency for stimulus detection was 20-30% and approximately constant across a wide range of orientation bandwidths. Unlike what we found with spatial frequency bandwidth, the results of our classification image experiment were consistent with the hypothesis that the orientation bandwidths of internal filters are adjustable. Thus, for orientation summation, both detection thresholds and classification images support the adjustable channels hypothesis. Classification images also revealed hallmarks of inhibition or suppression from uninformative spatial frequencies and/or orientations. This work highlights the limitations of the standard model of summation for orientation. The standard model of orientation summation and tuning was chiefly developed with narrow-band stimuli that were not presented in noise, stimuli that are arguably less naturalistic than the variable-bandwidth stimuli presented in noise used in our experiments. Finally, the disagreement between the results of our experiments on spatial frequency summation and the data presented in this paper suggests that orientation may be encoded more flexibly than spatial frequency.
Affiliation(s)
- Christopher P. Taylor
- Department of Psychology and Clinical Language Sciences, Centre for Integrative Neuroscience and Neurodynamics, University of Reading, Reading, UK
- Patrick J. Bennett
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Allison B. Sekuler
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
58. Corradi-Dell'Acqua C, Schwartz S, Meaux E, Hubert B, Vuilleumier P, Deruelle C. Neural responses to emotional expression information in high- and low-spatial frequency in autism: evidence for a cortical dysfunction. Front Hum Neurosci 2014; 8:189. [PMID: 24782735] [PMCID: PMC3988374] [DOI: 10.3389/fnhum.2014.00189]
Abstract
Despite an overall consensus that Autism Spectrum Disorder (ASD) entails atypical processing of human faces and emotional expressions, the role of neural structures involved in early facial processing remains unresolved. An influential model for the neurotypical brain suggests that face processing in the fusiform gyrus and the amygdala is based on both high-spatial frequency (HSF) information carried by a parvocellular pathway, and low-spatial frequency (LSF) information separately conveyed by a magnocellular pathway. Here, we tested the fusiform gyrus and amygdala sensitivity to emotional face information conveyed by these distinct pathways in ASD individuals (and matched Controls). During functional magnetic resonance imaging (fMRI), participants reported the apparent gender of hybrid face stimuli, made by merging two different faces (one in LSF and the other in HSF), out of which one displayed an emotional expression (fearful or happy) and the other was neutral. Controls exhibited increased fusiform activity to hybrid faces with an emotional expression (relative to hybrids composed only of neutral faces), regardless of whether this was conveyed by LSFs or HSFs in hybrid stimuli. ASD individuals showed intact fusiform response to LSF, but not HSF, expressions. Furthermore, the amygdala (and the ventral occipital cortex) was more sensitive to HSF than LSF expressions in Controls, but exhibited an opposite preference in ASD. Our data suggest spared LSF face processing in ASD, while cortical analysis of HSF expression cues appears affected. These findings converge with recent accounts suggesting that ASD might be characterized by a difficulty in integrating multiple local cues, causing global processing deficits that are not explained by losses in low-spatial-frequency inputs.
Affiliation(s)
- Corrado Corradi-Dell'Acqua
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University Medical Center, Geneva, Switzerland
- Sophie Schwartz
- Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University Medical Center, Geneva, Switzerland
- Emilie Meaux
- Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University Medical Center, Geneva, Switzerland
- Bénedicte Hubert
- Hôpital Rivière-des-Prairies, University of Montréal, Montréal, QC, Canada; CNRS, Institut de Neurosciences de la Timone, Aix-Marseille Université, Marseille, France
- Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neuroscience and Clinic of Neurology, University Medical Center, Geneva, Switzerland
- Christine Deruelle
- CNRS, Institut de Neurosciences de la Timone, Aix-Marseille Université, Marseille, France
59. Wallis TSA, Taylor CP, Wallis J, Jackson ML, Bex PJ. Characterization of field loss based on microperimetry is predictive of face recognition difficulties. Invest Ophthalmol Vis Sci 2014; 55:142-53. [PMID: 24302589] [DOI: 10.1167/iovs.13-12420]
Abstract
PURPOSE: To determine how visual field loss as assessed by microperimetry is correlated with deficits in face recognition.
METHODS: Twelve patients (age range, 26-70 years) with impaired visual sensitivity in the central visual field caused by a variety of pathologies and 12 normally sighted controls (control subject [CS] group; age range, 20-68 years) performed a face recognition task for blurred and unblurred faces. For patients, we assessed central visual field loss using microperimetry, fixation stability, Pelli-Robson contrast sensitivity, and letter acuity.
RESULTS: Patients were divided into two groups by microperimetry: a low vision (LV) group (n = 8) had impaired sensitivity at the anatomical fovea and/or poor fixation stability, whereas a low vision that excluded the fovea (LV:F) group (n = 4) was characterized by at least some residual foveal sensitivity but insensitivity in other retinal regions. The LV group performed worse than the other groups at all blur levels, whereas the performance of the LV:F group was not credibly different from that of the CS group. The performance of the CS and LV:F groups deteriorated as blur increased, whereas the LV group showed consistently poor performance regardless of blur. Visual acuity and fixation stability were correlated with face recognition performance.
CONCLUSIONS: Persons diagnosed as having disease affecting the central visual field can recognize faces as well as persons with no visual disease provided that they have residual sensitivity in the anatomical fovea and show stable fixation patterns. Performance in this task is limited by the upper resolution of nonfoveal vision or image blur, whichever is worse.
Affiliation(s)
- Thomas S A Wallis
- Schepens Eye Research Institute, Harvard Medical School, Boston, Massachusetts
60.

Abstract
We investigated recognition of blurry faces and whether viewing size affects identification of such severely degraded images. Despite the common belief that face perception relies on middle spatial frequencies, the critical spatial frequency band for face recognition is not fixed but rather depends on size. This is especially pronounced at small sizes, where observers choose to utilize lower, rather than middle, frequencies to identify a face. Here we assessed recognition of identity via a novel use of the face adaptation paradigm. We examined face identity aftereffects of blurry and intact adaptors at two sizes. Intact adaptors induced significant aftereffects regardless of size. Small, but not large, blurry adaptors produced aftereffects despite the fact that both contained exactly the same level of facial detail. This suggests an inability to utilize low-frequency information for perceiving identity in large faces. We conclude that (1) size is a key factor in human face recognition processes and (2) coarse facial images are better recognized at small sizes.
Affiliation(s)
- Kimeya Shahangian
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 West 10th Avenue, Vancouver, BC V5Z 1M9, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 West 10th Avenue, Vancouver, BC V5Z 1M9, Canada
61.

Abstract
Identification thresholds and the corresponding efficiencies (ideal/human thresholds) are typically computed by collapsing data across an entire stimulus set within a given task in order to obtain a "multiple-item" summary measure of information use. However, some individual stimuli may be processed more efficiently than others, and such differences are not captured by conventional multiple-item threshold measurements. Here, we develop and present a technique for measuring "single-item" identification efficiencies. The resulting measure describes the ability of the human observer to make use of the information provided by a single stimulus item within the context of the larger set of stimuli. We applied this technique to the identification of 3-D rendered objects (Exp. 1) and Roman alphabet letters (Exp. 2). Our results showed that efficiency can vary markedly across stimuli within a given task, demonstrating that single-item efficiency measures can reveal important information that is lost by conventional multiple-item efficiency measures.
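The efficiency measure defined above is just the ratio of the ideal observer's threshold to the human observer's, computed either over the whole stimulus set or, as proposed here, per item. A toy sketch with made-up threshold values (all numbers and names are purely illustrative, not data from the study):

```python
def identification_efficiency(ideal_threshold, human_threshold):
    # Efficiency is the ideal observer's threshold divided by the
    # human's (in the same energy units), so it falls in (0, 1].
    return ideal_threshold / human_threshold

# Hypothetical single-item thresholds for three letters:
ideal = {"A": 0.02, "B": 0.05, "C": 0.03}
human = {"A": 0.10, "B": 0.10, "C": 0.30}
per_item = {k: round(identification_efficiency(ideal[k], human[k]), 3)
            for k in ideal}
print(per_item)  # {'A': 0.2, 'B': 0.5, 'C': 0.1}
```

In this toy set, "C" would be the item processed least efficiently, a difference that a single collapsed multiple-item efficiency (one ratio for the whole alphabet) would hide.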
62. Anderson ND, Gleddie C. Comparing sensitivity to facial asymmetry and facial identity. Iperception 2013; 4:396-406. [PMID: 24349698] [PMCID: PMC3859556] [DOI: 10.1068/i0604]
Abstract
Bilateral symmetry is a facial feature that plays an important role in the aesthetic judgments of faces. The extent to which symmetry contributes to the identification of faces is less clear. We investigated the relationship between facial asymmetry and identity using synthetic face stimuli where the geometric identity of the face can be precisely controlled. Thresholds for all observers were 2 times lower for discriminating facial asymmetry than they were for discriminating facial identity. The advantage for discriminating asymmetrical forms was not observed using nonface shape stimuli, suggesting this advantage is face-specific. Moreover, asymmetry thresholds were not affected when faces were either inverted or constructed about a nonmean face. These results, taken together, suggest that facial asymmetry is a characteristic that we are exquisitely sensitive to, and that may not contribute to face identification. This conclusion is consistent with neuroimaging evidence that suggests that face symmetry and face identity are processed by different neural mechanisms.
Affiliation(s)
- Nicole D Anderson
- Department of Psychology, MacEwan University, CCC 10700-104 Ave., Edmonton, AB T5J 4S2, Canada
- Chris Gleddie
- Department of Psychology, MacEwan University, CCC 10700-104 Ave., Edmonton, AB T5J 4S2, Canada
63. Gao X, Wilson HR. Implicit learning of geometric eigenfaces. Vision Res 2013; 99:12-8. [PMID: 23911769] [DOI: 10.1016/j.visres.2013.07.015]
Abstract
The human visual system can implicitly extract a prototype of encountered visual objects (Posner & Keele, 1968). While learning a prototype provides an efficient way of encoding objects at the category level, discrimination among individual objects requires encoding of variations among them as well. Here we show that, in addition to the prototype, human adults also implicitly learn the feature correlations that capture the most significant geometric variations among faces. After studying a group of synthetic faces, observers mistook previously unseen faces representing the first two principal components (eigenfaces; Turk & Pentland, 1991) of the studied faces for faces they had seen, at rates significantly higher than their correct recognition of the faces actually studied. Implicit learning of the most significant eigenfaces provides an optimal way of encoding variations among faces. The data thus extend the types of summary statistics that can be implicitly extracted by the visual system to include several principal components.
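The prototype-plus-eigenfaces decomposition referenced here (Turk & Pentland, 1991) is a standard PCA over vectorized faces: the mean is the prototype, and the leading principal components are the eigenfaces. A minimal sketch, with random toy measurements standing in for the study's synthetic geometric faces:

```python
import numpy as np

def prototype_and_eigenfaces(faces, n_components=2):
    # PCA over vectorized faces: the mean is the "prototype", and the
    # leading right singular vectors of the centered data are the
    # eigenfaces capturing the largest geometric variations.
    X = np.asarray(faces, dtype=float)  # rows = faces, columns = measurements
    prototype = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - prototype, full_matrices=False)
    return prototype, Vt[:n_components]

# Toy demo: five "faces" described by six geometric measurements each.
rng = np.random.default_rng(1)
faces = rng.normal(size=(5, 6))
prototype, eigenfaces = prototype_and_eigenfaces(faces)
print(eigenfaces.shape)  # (2, 6)
```

Any studied face can then be approximated as the prototype plus a weighted sum of the eigenfaces, which is why faces lying exactly along the first components feel so familiar.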
Affiliation(s)
- Xiaoqing Gao
- Centre for Vision Research, York University, Canada.
64. Gao X, Wilson HR. The neural representation of face space dimensions. Neuropsychologia 2013; 51:1787-93. [PMID: 23850598] [DOI: 10.1016/j.neuropsychologia.2013.07.001]
Abstract
Functional neuroimaging studies have identified a network of brain areas that are more active to faces than to other objects. However, it remains largely unclear how these areas encode individual facial identity. To investigate the neural representations of facial identity, we constructed a multidimensional face space structure whose dimensions were derived from geometric information of faces using principal component analysis (PCA). Using fMRI, we recorded participants' neural responses when viewing blocks of faces that differed only on one dimension within a block. Although the response magnitudes to different blocks of faces did not differ in a univariate analysis, multi-voxel pattern analysis revealed distinct patterns related to different face space dimensions in brain areas that have a higher response magnitude to faces than to other objects. The results indicate that dimensions of the face space are encoded in the face-selective brain areas in a spatially distributed way.
Affiliation(s)
- Xiaoqing Gao
- Centre for Vision Research, York University, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3.
65. de Heering A, Maurer D. The effect of spatial frequency on perceptual learning of inverted faces. Vision Res 2013; 86:107-14. [PMID: 23643906] [DOI: 10.1016/j.visres.2013.04.014]
Abstract
We investigated the efficacy of training adults to recognize full spectrum inverted faces presented with different viewpoints. To examine the role of different spatial frequencies in any learning, we also used high-pass filtered faces that preserved featural information and low-pass filtered faces that severely reduced that featural information. Although all groups became faster over the 2 days of training, there was more improvement in accuracy for the group exposed to full spectrum faces than in the two groups exposed to filtered faces, both of which improved more modestly and only when the same faces were shown on the 2 days of training. For the group exposed to the full spectrum range and, to a lesser extent, for those exposed to high frequency faces, training generalized to a new set of full spectrum faces of a different size in a different task, but did not lead to evidence of holistic processing or improved sensitivity to feature shape or spacing in inverted faces. Overall, these results demonstrate that only 2 h of practice in recognizing full-spectrum inverted faces presented from multiple points of view is sufficient to improve recognition of the trained faces and to generalize to novel instances. Perceptual learning also occurred for low and high frequency faces, but to a smaller extent.
66. Gold JM, Barker JD, Barr S, Bittner JL, Bromfield WD, Chu N, Goode RA, Lee D, Simmons M, Srinath A. The efficiency of dynamic and static facial expression recognition. J Vis 2013; 13(5):23. [PMID: 23620533] [DOI: 10.1167/13.5.23]
Abstract
Unlike frozen snapshots of facial expressions that we often see in photographs, natural facial expressions are dynamic events that unfold in a particular fashion over time. But how important are the temporal properties of expressions for our ability to reliably extract information about a person's emotional state? We addressed this question experimentally by gauging human performance in recognizing facial expressions with varying temporal properties relative to that of a statistically optimal ("ideal") observer. We found that people recognized emotions just as efficiently when viewing them as naturally evolving dynamic events, temporally reversed events, temporally randomized events, or single images frozen in time. Our results suggest that the dynamic properties of human facial movements may play a surprisingly small role in people's ability to infer the emotional states of others from their facial expressions.
Affiliation(s)
- Jason M Gold
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA.
|
67
|
Implicit face prototype learning from geometric information. Vision Res 2013; 82:1-12. [DOI: 10.1016/j.visres.2013.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2011] [Revised: 01/30/2013] [Accepted: 02/02/2013] [Indexed: 11/23/2022]
|
68
|
Awasthi B, Sowman PF, Friedman J, Williams MA. Distinct spatial scale sensitivities for early categorization of faces and places: neuromagnetic and behavioral findings. Front Hum Neurosci 2013; 7:91. [PMID: 23519842 PMCID: PMC3604654 DOI: 10.3389/fnhum.2013.00091] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2012] [Accepted: 03/04/2013] [Indexed: 11/21/2022] Open
Abstract
Research exploring the role of spatial frequencies in rapid stimulus detection and categorization reports flexible reliance on specific spatial frequency (SF) bands. Here, through a set of behavioral and magnetoencephalography (MEG) experiments, we investigated the role of low spatial frequency (LSF) (<8 cycles/face) and high spatial frequency (HSF) (>25 cycles/face) information during the categorization of faces and places. Reaction time measures revealed significantly faster categorization of faces driven by LSF information, while rapid categorization of places was facilitated by HSF information. The MEG study showed significantly earlier latency of the M170 component for LSF faces compared to HSF faces. Moreover, the M170 amplitude was larger for LSF faces than for LSF places, whereas the reverse pattern was evident for HSF faces and places. These results suggest that SF modulates the processing of category-specific information for faces and places.
|
69
|
Peters JC, Vlamings P, Kemner C. Neural processing of high and low spatial frequency information in faces changes across development: qualitative changes in face processing during adolescence. Eur J Neurosci 2013; 37:1448-57. [DOI: 10.1111/ejn.12172] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2012] [Revised: 01/15/2013] [Accepted: 01/28/2013] [Indexed: 11/26/2022]
Affiliation(s)
- Petra Vlamings
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
|
70
|
Pachai MV, Sekuler AB, Bennett PJ. Sensitivity to Information Conveyed by Horizontal Contours is Correlated with Face Identification Accuracy. Front Psychol 2013; 4:74. [PMID: 23444233 PMCID: PMC3580391 DOI: 10.3389/fpsyg.2013.00074] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2012] [Accepted: 02/03/2013] [Indexed: 11/13/2022] Open
Abstract
We measured thresholds in a 1-of-10 face identification task in which stimuli were embedded in orientation-filtered Gaussian noise. For upright faces, the threshold elevation produced by the masking noise varied as a function of noise orientation: significantly greater masking was obtained with horizontal noise than with vertical noise. However, the orientation selectivity of masking was significantly less with inverted faces. The performance of an ideal observer was qualitatively similar to human observers viewing upright faces: the masking function exhibited a peak for horizontally oriented noise although the selectivity of masking was greater than what was observed in human observers. These results imply that significantly more information about facial identity was conveyed by horizontal contours than by vertical contours, and that human observers use this information more efficiently to identify upright faces than inverted faces. We also found a significant positive correlation between selectivity for horizontal information and face identification accuracy for upright, but not inverted faces. Finally, there was a significant positive correlation between horizontal tuning and the size of the face inversion effect. These results demonstrate that the use of information conveyed by horizontal contours is associated with face identification accuracy and the magnitude of the face inversion effect.
Affiliation(s)
- Matthew V Pachai
- Department of Psychology, Neuroscience, and Behaviour, McMaster University Hamilton, ON, Canada
|
71
|
Chen CC, Chen CM, Tyler CW. Depth structure from asymmetric shading supports face discrimination. PLoS One 2013; 8:e55865. [PMID: 23457484 PMCID: PMC3573058 DOI: 10.1371/journal.pone.0055865] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2012] [Accepted: 01/03/2013] [Indexed: 11/19/2022] Open
Abstract
To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component.
Affiliation(s)
- Chien-Chung Chen
- Department of Psychology, National Taiwan University, Taipei, Taiwan.
|
72
|
Brimijoin WO, Akeroyd MA, Tilbury E, Porr B. The internal representation of vowel spectra investigated using behavioral response-triggered averaging. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 133:EL118-EL122. [PMID: 23363191 PMCID: PMC3864535 DOI: 10.1121/1.4778264] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Listeners presented with noise were asked to press a key whenever they heard the vowels [a] or [i:]. The noise had a random spectrum, with levels in 60 frequency bins changing every 0.5 s. Reverse correlation was used to average the spectrum of the noise prior to each key press, thus estimating the features of the vowels for which the participants were listening. The formant frequencies of these reverse-correlated vowels were similar to those of their respective whispered vowels. The success of this response-triggered technique suggests that it may prove useful for estimating other internal representations, including perceptual phenomena like tinnitus.
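The behavioral response-triggered averaging described above is a form of reverse correlation and can be sketched as follows. Only the 60-bin noise spectrum comes from the abstract; the simulated listener, its decision threshold, and the two formant-like template bins are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames = 60, 20000  # 60 frequency bins, as in the experiment

# Hidden "vowel template" the simulated listener is tuned to
# (two formant-like peaks; bin positions are arbitrary here).
template = np.zeros(n_bins)
template[[10, 25]] = 1.0

# Random noise spectra: the level in each bin changes on every frame.
noise = rng.normal(size=(n_frames, n_bins))

# Simulated listener: "press a key" whenever the current spectrum
# happens to match the template well enough.
presses = noise @ template > 2.0

# Reverse correlation: average the spectra that triggered a press.
recovered = noise[presses].mean(axis=0)

# The average spectrum should peak at the template's formant bins,
# recovering the internal representation from behavior alone.
peaks = sorted(np.argsort(recovered)[-2:])
```

With enough frames, `peaks` recovers the template's bins (10 and 25), mirroring how the averaged pre-press noise in the study resembled whispered vowel formants.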
Affiliation(s)
- W Owen Brimijoin
- MRC Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 16 Alexandra Parade, Glasgow G31 2ER, United Kingdom.
|
73
|
Buffat S, Plantier J, Roumes C, Lorenceau J. Repetition blindness for natural images of objects with viewpoint changes. Front Psychol 2013; 3:622. [PMID: 23346069 PMCID: PMC3551441 DOI: 10.3389/fpsyg.2012.00622] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2012] [Accepted: 12/30/2012] [Indexed: 11/13/2022] Open
Abstract
When stimuli are repeated in a rapid serial visual presentation (RSVP), observers sometimes fail to report the second occurrence of a target. This phenomenon is referred to as “repetition blindness” (RB). We report an RSVP experiment with photographs in which we manipulated object viewpoints between the first and second occurrences of a target (0°, 45°, or 90° changes), and spatial frequency (SF) content. Natural images were spatially filtered to produce low, medium, or high SF stimuli. RB was observed for all filtering conditions. Surprisingly, for full-spectrum (FS) images, RB increased significantly as the viewpoint change reached 90°. For filtered images, a similar pattern of results was found for all conditions except for medium SF stimuli. These findings suggest that object recognition in RSVP is subtended by viewpoint-specific representations for all spatial frequencies except medium ones.
Affiliation(s)
- Stéphane Buffat
- Institut de Recherche Biomédicale des Armées Brétigny sur Orge, France
|
74
|
Vesker M, Wilson HR. Face context advantage explained by vernier and separation discrimination acuity. Front Psychol 2013; 3:617. [PMID: 23346066 PMCID: PMC3549620 DOI: 10.3389/fpsyg.2012.00617] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2012] [Accepted: 12/23/2012] [Indexed: 11/21/2022] Open
Abstract
Seeing facial features in the context of a full face is known to provide an advantage for perception. Using an interocular separation perception task, we confirmed that seeing eyes within the context of a face improves discrimination in synthetic faces. We also show that this face-context improvement can be explained by the presence of individual components of the face, such as the nose, mouth, or head-outline. We demonstrate that improvements due to the presence of the nose and head-outline can be explained in terms of two-point separation measurements, obeying Weber’s law as established in the literature. We also demonstrate that performance improvements due to the presence of the mouth can be explained in terms of Vernier acuity judgments between eye positions and the corners of the mouth. Overall, our study shows that the improvements in perception of facial features due to the face context effect can be traced to well understood basic visual measurements that may play a very general role in perceptual measurements of distance. Deficiencies in these measurements may also play a role in prosopagnosia. Additionally, we show interference of the eyebrows with the face-inversion effect for interocular discrimination.
Affiliation(s)
- Michael Vesker
- Department of Biology, Centre for Vision Research, York University Toronto, ON, Canada
|
75
|
Abstract
Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues.
Affiliation(s)
- Anshul Jain
- Graduate Center for Vision Research, SUNY College of Optometry, New York, NY, USA.
|
76
|
Nagai M, Bennett PJ, Rutherford MD, Gaspar CM, Kumada T, Sekuler AB. Comparing face processing strategies between typically-developed observers and observers with autism using sub-sampled-pixels presentation in response classification technique. Vision Res 2013; 79:27-35. [PMID: 23321026 DOI: 10.1016/j.visres.2013.01.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2012] [Revised: 12/30/2012] [Accepted: 01/03/2013] [Indexed: 01/20/2023]
Abstract
In the present study we modified the standard classification image method by subsampling visual stimuli to provide us with a technique capable of examining an individual's face-processing strategy in detail with fewer trials. Experiment 1 confirmed that one testing session (1450 trials) was sufficient to produce classification images that were qualitatively similar to those obtained previously with 10,000 trials (Sekuler et al., 2004). Experiment 2 used this method to compare classification images obtained from observers with autism spectrum disorders (ASD) and typically-developing (TD) observers. As was found in Experiment 1, classification images obtained from TD observers suggested that they all discriminated faces based on information conveyed by pixels in the eyes/brow region. In contrast, classification images obtained from ASD observers suggested that they used different perceptual strategies: three out of five ASD observers used a typical strategy of making use of information in the eye/brow region, but two used an atypical strategy that relied on information in the forehead region. The advantage of the response classification technique is that it is not restricted to specific theoretical perspectives or a priori hypotheses; this enabled us to observe unexpected strategies, such as the forehead strategy in ASD observers, and shows that the technique is particularly useful for examining special populations.
Affiliation(s)
- Masayoshi Nagai
- National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba Central 6, 1-1-1 Higashi, Tsukuba, Ibaraki 305-8566, Japan.
|
77
|
Collin CA, Therrien ME, Campbell KB, Hamm JP. Effects of band-pass spatial frequency filtering of face and object images on the amplitude of N170. Perception 2012; 41:717-32. [PMID: 23094460 DOI: 10.1068/p7056] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Previous studies have suggested that physiological responses are greatest and face recognition performance is best when a band of middle relative spatial frequencies (SFs) is included in stimuli. Conversely, behavioural data suggest that object recognition performance shows comparatively little effect of SF variations. Here, we examine the effects of SF filtering on the amplitude of the N170 ERP component when participants are shown images of faces and objects. Our findings show that with face stimuli the amplitude of N170 exhibits a band-pass modulation function, with responses to middle SFs (around 11 cycles per face) being statistically indistinguishable from responses to full-band faces. In contrast to faces, object stimuli elicited a relatively flat function across much of the spectrum. However, for both faces and objects, middle spatial frequencies were sufficient to elicit the same N170 magnitude as full-band images. Our results with face stimuli are in accordance with previous work examining single-cell and MEG responses. Our results with objects are compatible with previous behavioural work showing a relative robustness of object recognition to SF manipulations. Our findings are novel in showing that the middle band elicits the same N170 as full-band images in both faces and objects.
Affiliation(s)
- Charles A Collin
- School of Psychology, University of Ottawa, Ottawa, ON K1N 6N5, Canada.
|
78
|
Looking just below the eyes is optimal across face recognition tasks. Proc Natl Acad Sci U S A 2012; 109:E3314-23. [PMID: 23150543 DOI: 10.1073/pnas.1214269109] [Citation(s) in RCA: 156] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023] Open
Abstract
When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person's identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations, correctly predicts human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes.
|
79
|
Wiecek E, Jackson ML, Dakin SC, Bex P. Visual search with image modification in age-related macular degeneration. Invest Ophthalmol Vis Sci 2012; 53:6600-9. [PMID: 22930725 DOI: 10.1167/iovs.12-10012] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
PURPOSE: AMD results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of the effectiveness of enhancement. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification.
METHODS: Eye movements of 17 patients (mean age = 77 years) with AMD were recorded while they searched for target objects in natural images. Eight different image modification methods were implemented and included manipulations of local image or edge contrast, color, and crowding. In a subsequent task, patients ranked their preference of the image modifications.
RESULTS: Within individual participants, there was no significant difference in search duration or accuracy across eight different image manipulations. When data were collapsed across all image modifications, a multivariate model identified six significant predictors for normalized search duration including scotoma size and acuity, as well as interactions among scotoma size, age, acuity, and contrast (P < 0.05). Additionally, an analysis of image statistics showed no correlation with search performance across all image modifications. Rank ordering of enhancement methods based on participants' preference revealed a trend that participants preferred the least modified images (P < 0.05).
CONCLUSIONS: There was no quantitative effect of image modification on search performance. A better understanding of low- and high-level components of visual search in natural scenes is necessary to improve future attempts at image enhancement for low vision patients. Different search tasks may require alternative image modifications to improve patient functioning and performance.
Affiliation(s)
- Emily Wiecek
- Massachusetts Eye and Ear Infirmary, 20 Staniford Street, Boston, MA 02118, USA.
|
80
|
Morissette L, Chartier S, Vandermeulen R, Watier N. Depth of treatment sensitive noise resistant dynamic artificial neural networks model of recall in people with prosopagnosia. Neural Netw 2012; 32:46-56. [DOI: 10.1016/j.neunet.2012.02.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2011] [Revised: 01/14/2012] [Accepted: 02/07/2012] [Indexed: 11/25/2022]
|
81
|
The response of face-selective cortex with single face parts and part combinations. Neuropsychologia 2012; 50:2454-9. [PMID: 22750118 DOI: 10.1016/j.neuropsychologia.2012.06.016] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2012] [Revised: 05/21/2012] [Accepted: 06/18/2012] [Indexed: 11/20/2022]
Abstract
A critical issue in object recognition research is how the parts of an object are analyzed by the visual system and combined into a perceptual whole. However, most of the previous research has examined how changes to object parts influence recognition of the whole, rather than recognition of the parts themselves. This is particularly true of the research on face recognition, and especially with questions related to the neural substrates. Here, we investigated patterns of BOLD fMRI brain activation with internal face parts (features) presented singly and in different combinations. A preference for single features over combinations was found in the occipital face area (OFA) as well as a preference for the two-eyes combination stimulus over other combination stimulus types. The fusiform face area (FFA) and lateral occipital cortex (LO) showed no preferences among the single feature and combination stimulus types. The results are consistent with a growing view that the OFA represents processes involved in early, feature-based analysis.
|
82
|
Awasthi B, Friedman J, Williams MA. Reach trajectories reveal delayed processing of low spatial frequency faces in developmental prosopagnosia. Cogn Neurosci 2012; 3:120-30. [DOI: 10.1080/17588928.2012.673482] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
|
83
|
Hansen BC, Hess RF. On the effectiveness of noise masks: naturalistic vs. un-naturalistic image statistics. Vision Res 2012; 60:101-13. [PMID: 22484251 DOI: 10.1016/j.visres.2012.03.017] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2011] [Revised: 01/10/2012] [Accepted: 03/23/2012] [Indexed: 11/28/2022]
Abstract
It has been argued that the human visual system is optimized for identification of broadband objects embedded in stimuli whose orientation-averaged power spectra fall off according to the 1/f^β relationship typically observed in natural scene imagery (i.e., β = 2.0 on logarithmic axes). Here, we were interested in whether individual spatial channels leading to recognition are functionally optimized for narrowband targets when masked by noise possessing naturalistic image statistics (β = 2.0). The current study therefore explores the impact of variable-β noise masks on the identification of narrowband target stimuli ranging in spatial complexity, while simultaneously controlling for physical or perceived differences between the masks. The results show that β = 2.0 noise masks produce the largest identification thresholds regardless of target complexity, and thus do not seem to yield functionally optimized channel processing. The differential masking effects are discussed in the context of contrast gain control.
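Noise masks with a 1/f^β orientation-averaged power spectrum, as used above, are commonly synthesized with a random-phase method; a minimal sketch (the function name, parameters, and normalization are our own, with β = 2.0 corresponding to the naturalistic condition):

```python
import numpy as np

def noise_mask(n, beta=2.0, seed=0):
    """n x n noise image whose orientation-averaged power spectrum
    falls off as 1/f**beta (beta = 2.0 mimics natural scenes)."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)               # radial spatial frequency
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)     # power ~ amplitude**2 ~ 1/f**beta
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (img - img.mean()) / img.std()  # zero mean, unit variance

mask = noise_mask(128, beta=2.0)
```

Varying `beta` away from 2.0 gives the "un-naturalistic" masks of the study; the normalization keeps masks comparable in overall contrast energy.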
Affiliation(s)
- Bruce C Hansen
- Department of Psychology & Neuroscience Program, Colgate University, Hamilton, NY 13346, USA.
|
84
|
Gold JM, Mundy PJ, Tjan BS. The perception of a face is no more than the sum of its parts. Psychol Sci 2012; 23:427-34. [PMID: 22395131 DOI: 10.1177/0956797611427407] [Citation(s) in RCA: 59] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
When you see a person's face, how do you go about combining his or her facial features to make a decision about who that person is? Most current theories of face perception assert that the ability to recognize a human face is not simply the result of an independent analysis of individual features, but instead involves a holistic coding of the relationships among features. This coding is thought to enhance people's ability to recognize a face beyond what would be expected if each feature were shown in isolation. In the study reported here, we explicitly tested this idea by comparing human performance on facial-feature integration with that of an optimal Bayesian integrator. Contrary to the predictions of most current notions of face perception, our findings showed that human observers integrate facial features in a manner that is no better than would be predicted by their ability to use each individual feature when shown in isolation. That is, a face is perceived no better than the sum of its individual parts.
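For independent, Gaussian-limited cues, the optimal-integrator benchmark invoked above reduces to root-sum-of-squares summation of the part sensitivities; a minimal sketch with hypothetical d' values (illustrative numbers, not the paper's data):

```python
import numpy as np

# Hypothetical sensitivities (d') for individual facial features
# measured in isolation (not the paper's measurements).
d_parts = {"eyes": 1.2, "nose": 0.5, "mouth": 0.8}

# An optimal Bayesian integrator of independent features achieves
# d'_optimal = sqrt(sum of squared part sensitivities).
d_optimal = np.sqrt(sum(d ** 2 for d in d_parts.values()))

# "No better than the sum of its parts": the measured whole-face d'
# sits at or below this bound; an integration index of 1 would mark
# optimal (non-holistic) summation, and values above 1 would indicate
# a holistic benefit.
d_whole = 1.5  # hypothetical whole-face measurement
integration_index = d_whole / d_optimal
```

On this benchmark, the study's finding is that observers' integration index does not exceed 1: whole-face performance matches what the isolated features alone predict.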
Affiliation(s)
- Jason M Gold
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA.
|
85
|
Abstracts of the British Society of Audiology annual conference (incorporating the Experimental and Clinical Short papers meetings). Int J Audiol 2012. [DOI: 10.3109/14992027.2012.653103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
86
|
Abstract
Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.
Affiliation(s)
- Shichuan Du
- The Ohio State University, Columbus, OH 43210, USA
|
87
|
Lee Y, Grady CL, Habak C, Wilson HR, Moscovitch M. Face Processing Changes in Normal Aging Revealed by fMRI Adaptation. J Cogn Neurosci 2011; 23:3433-47. [DOI: 10.1162/jocn_a_00026] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
We investigated the neural correlates of facial processing changes in healthy aging using fMRI and an adaptation paradigm. In the scanner, participants were successively presented with faces that varied in identity, viewpoint, both, or neither and performed a head size detection task independent of identity or viewpoint. In right fusiform face area (FFA), older adults failed to show adaptation to the same face repeatedly presented in the same view, which elicited the most adaptation in young adults. We also performed a multivariate analysis to examine correlations between whole-brain activation patterns and behavioral performance in a face-matching task tested outside the scanner. Despite poor neural adaptation in right FFA, high-performing older adults engaged the same face-processing network as high-performing young adults across conditions, except the one presenting a same facial identity across different viewpoints. Low-performing older adults used this network to a lesser extent. Additionally, high-performing older adults uniquely recruited a set of areas related to better performance across all conditions, indicating age-specific involvement of this added network. This network did not include the core ventral face-processing areas but involved the left inferior occipital gyrus, frontal, and parietal regions. Although our adaptation results show that the neuronal representations of the core face-preferring areas become less selective with age, our multivariate analysis indicates that older adults utilize a distinct network of regions associated with better face matching performance, suggesting that engaging this network may compensate for deficiencies in ventral face processing regions.
Affiliation(s)
- Yunjo Lee
- Rotman Research Institute, Baycrest Centre, Toronto, Canada
- Cheryl L. Grady
- Rotman Research Institute, Baycrest Centre, Toronto, Canada
- University of Toronto
- Morris Moscovitch
- Rotman Research Institute, Baycrest Centre, Toronto, Canada
- University of Toronto
|
88
|
Gaspar CM, Rousselet GA, Pernet CR. Reliability of ERP and single-trial analyses. Neuroimage 2011; 58:620-9. [DOI: 10.1016/j.neuroimage.2011.06.052] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2010] [Revised: 06/10/2011] [Accepted: 06/20/2011] [Indexed: 10/18/2022] Open
89
Kwon M, Legge GE. Spatial-frequency cutoff requirements for pattern recognition in central and peripheral vision. Vision Res 2011; 51:1995-2007. [PMID: 21854800] [DOI: 10.1016/j.visres.2011.06.020]
Abstract
It is well known that object recognition requires spatial frequencies exceeding some critical cutoff value. People with central scotomas who rely on peripheral vision have substantial difficulty with reading and face recognition. Deficiencies of pattern recognition in peripheral vision might result in higher cutoff requirements and may contribute to the functional problems of people with central-field loss. Here we asked about differences in spatial-cutoff requirements in central and peripheral vision for letter and face recognition. The stimuli were the 26 letters of the English alphabet and 26 celebrity faces. Each image was blurred using a low-pass filter in the spatial frequency domain. Critical cutoffs (defined as the minimum low-pass filter cutoff yielding 80% accuracy) were obtained by measuring recognition accuracy as a function of cutoff frequency (in cycles per object). Our data showed that critical cutoffs increased from central to peripheral vision by 20% for letter recognition and by 50% for face recognition. We asked whether these differences could be accounted for by central/peripheral differences in the contrast sensitivity function (CSF). We addressed this question by implementing an ideal-observer model which incorporates empirical CSF measurements and tested the model on letter and face recognition. The success of the model indicates that central/peripheral differences in the cutoff requirements for letter and face recognition can be accounted for by the information content of the stimulus limited by the shape of the human CSF, combined with a source of internal noise and followed by an optimal decision rule.
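The low-pass filtering step described in this abstract (a cutoff expressed in cycles per object, applied in the spatial-frequency domain) can be sketched in a few lines. This is a minimal illustration using an ideal (hard-edged) filter, not the authors' stimulus-generation code; the function name is ours, and the paper's filters may have had smoother roll-offs.

```python
import numpy as np

def lowpass_cycles_per_image(img, cutoff):
    """Remove spatial frequencies above `cutoff` (in cycles/image).

    `img` is a 2-D grayscale array. An ideal (hard-edged) low-pass
    filter is applied in the Fourier domain.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w            # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff               # keep only frequencies at/below cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

Measuring recognition accuracy while raising `cutoff` until accuracy reaches 80% would then yield the critical cutoff defined in the abstract.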
Affiliation(s)
- Miyoung Kwon
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Rd., Minneapolis, MN 55455, USA.
90
91
Rousselet GA, Gaspar CM, Wieczorek KP, Pernet CR. Modeling Single-Trial ERP Reveals Modulation of Bottom-Up Face Visual Processing by Top-Down Task Constraints (in Some Subjects). Front Psychol 2011; 2:137. [PMID: 21886627] [PMCID: PMC3153882] [DOI: 10.3389/fpsyg.2011.00137]
Abstract
We studied how task constraints modulate the relationship between single-trial event-related potentials (ERPs) and image noise. Thirteen subjects performed two interleaved tasks: on different blocks, they saw the same stimuli, but they discriminated either between two faces or between two colors. Stimuli were two pictures of red or green faces that contained from 10 to 80% of phase noise, with 10% increments. Behavioral accuracy followed a noise dependent sigmoid in the identity task but was high and independent of noise level in the color task. EEG data recorded concurrently were analyzed using a single-trial ANCOVA: we assessed how changes in task constraints modulated ERP noise sensitivity while regressing out the main ERP differences due to identity, color, and task. Single-trial ERP sensitivity to image phase noise started at about 95-110 ms post-stimulus onset. Group analyses showed a significant reduction in noise sensitivity in the color task compared to the identity task from about 140 ms to 300 ms post-stimulus onset. However, statistical analyses in every subject revealed different results: significant task modulation occurred in 8/13 subjects, one showing an increase and seven showing a decrease in noise sensitivity in the color task. Onsets and durations of effects also differed between group and single-trial analyses: at any time point only a maximum of four subjects (31%) showed results consistent with group analyses. We provide detailed results for all 13 subjects, including a shift function analysis that revealed asymmetric task modulations of single-trial ERP distributions. We conclude that, during face processing, bottom-up sensitivity to phase noise can be modulated by top-down task constraints, in a broad window around the P2, at least in some subjects.
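The per-subject noise-sensitivity measure at the heart of this analysis can be approximated by a simple single-trial regression. The sketch below (numpy only, our naming) regresses trial-wise ERP amplitude at one electrode/time point on phase-noise level; it is a deliberate simplification of the paper's single-trial ANCOVA, which additionally regresses out identity, color, and task effects.

```python
import numpy as np

def noise_sensitivity(amplitudes, noise_levels):
    """Slope of single-trial ERP amplitude vs. image phase-noise level.

    `amplitudes` and `noise_levels` are 1-D arrays with one entry per
    trial. The regression slope indexes how strongly the ERP at this
    electrode/time point tracks stimulus noise.
    """
    slope, _intercept = np.polyfit(noise_levels, amplitudes, 1)
    return slope
```

Comparing this slope between the identity and color tasks, subject by subject, mirrors the kind of task-modulation contrast reported in the abstract.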
Affiliation(s)
- Guillaume A. Rousselet
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Carl M. Gaspar
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Kacper P. Wieczorek
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Cyril R. Pernet
- Brain Research Imaging Centre, SINAPSE Collaboration, University of Edinburgh, Edinburgh, UK
92
Abstract
Humans extract visual information from the world through spatial frequency (SF) channels that are sensitive to different scales of light-dark fluctuations across visual space. Using two methods, we measured human SF tuning for discriminating videos of human actions (walking, running, skipping and jumping). The first, more traditional, approach measured signal-to-noise ratio (s/n) thresholds for videos filtered by one of six Gaussian band-pass filters ranging from 4 to 128 cycles/image. The second approach used SF “bubbles” (Willenbockel et al., Journal of Experimental Psychology: Human Perception and Performance, 36(1), 122-135, 2010), which randomly filters the entire SF domain on each trial and uses reverse correlation to estimate SF tuning. Results from both methods were consistent and revealed a diagnostic SF band centered between 12-16 cycles/image (about 1-1.25 cycles/body width). Efficiency on this task, estimated by comparing s/n thresholds for humans against those of an ideal observer, was quite low (>.04%) in both experiments.
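The first method's Gaussian band-pass filtering can be sketched as a Gaussian applied to radial spatial frequency in the Fourier domain. The exact filter form, function name, and parameters below are our assumptions for illustration, not the authors' code.

```python
import numpy as np

def gaussian_bandpass(img, center, sigma):
    """Band-pass filter an image around `center` cycles/image.

    A Gaussian gain is applied to radial spatial frequency, with
    `sigma` (cycles/image) controlling the bandwidth.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    gain = np.exp(-((radius - center) ** 2) / (2.0 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))
```

Applying six such filters with centers from 4 to 128 cycles/image and measuring s/n thresholds for each would reproduce the logic of the first experiment.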
93
Abstract
PURPOSE Difficulty identifying faces is a common complaint of people with central vision loss. Dakin and Watt (2009) reported that the horizontal components of face images are most informative for face identification in normal vision. In this study, we examined whether people with central vision loss similarly rely primarily on the horizontal components of face images for face identification. METHODS Seven observers with central vision loss (mean age = 69 ± 9 [SD]) and five age-matched observers with normal vision (mean age = 65 ± 6) participated in this study. We measured observers' accuracy for reporting the identity of face images spatially filtered using an orientation filter with center orientation ranging from 0 (horizontal) to 150° in steps of 30°, with a bandwidth of 23°. Face images without filtering were also tested. RESULTS For all observers, accuracy for identifying filtered face images was highest around the horizontal orientation, dropping systematically as the filter orientation deviated from horizontal, and was the lowest at the vertical orientation. Compared with control observers, observers with central vision loss showed (1) a larger difference in accuracy between identifying filtered (at peak performance) and unfiltered face images; (2) a reduced accuracy at peak performance; and (3) a smaller difference in performance for identifying filtered images between the horizontal and the vertical filter orientations. CONCLUSIONS Spatial information around the horizontal orientation in face images is the most important for face identification, for people with normal vision and central vision loss alike. While the horizontal information alone can support reasonably good performance for identifying faces in people with normal vision, people with central vision loss seem to also rely on information along other orientations.
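The orientation filtering described in the Methods (a center orientation plus a bandwidth) can be sketched as an angular Gaussian in the Fourier domain. This is an illustrative reconstruction, not the authors' code: the 180°-periodic Gaussian form and the naming are assumptions, and note that "orientation" here is the Fourier-domain angle (horizontal image structure corresponds to energy along the vertical frequency axis).

```python
import numpy as np

def orientation_filter(img, center_deg, sigma_deg):
    """Keep Fourier energy near one orientation (degrees, 180-deg period)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    angle = np.degrees(np.arctan2(fy, fx)) % 180.0
    d = np.abs(angle - (center_deg % 180.0))
    d = np.minimum(d, 180.0 - d)           # angular distance with wrap-around
    gain = np.exp(-(d ** 2) / (2.0 * sigma_deg ** 2))
    gain.flat[0] = 1.0                     # preserve DC (mean luminance)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))
```

Sweeping `center_deg` from 0 to 150° in steps of 30°, as in the study, then measuring identification accuracy for each filtered image set, would trace out the orientation tuning curve the abstract describes.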
Affiliation(s)
- Deyue Yu
- School of Optometry, University of California, Berkeley, Berkeley, California, USA.
94
McCulloch DL, Loffler G, Colquhoun K, Bruce N, Dutton GN, Bach M. The effects of visual degradation on face discrimination. Ophthalmic Physiol Opt 2011; 31:240-8. [PMID: 21410744] [DOI: 10.1111/j.1475-1313.2011.00828.x]
Abstract
PURPOSE People with reduced visual acuity (VA) and/or contrast sensitivity have difficulty recognizing faces and facial expressions. We have quantified these difficulties, using a synthetic face discrimination task employing both normal and artificially degraded vision. METHODS VA and contrast thresholds were measured using an optimised staircase procedure [Freiburg Acuity Test (FrACT)] in 25 young adults (aged 18-24 years) with corrected visual acuity of 0.0 logMAR or better and with four levels of vision degraded with Bangerter occlusion foils. For face discrimination, male face images were synthesised from 37 cardinal points (position of eyes, width of nose, head shape etc.) derived from frontal face photographs and manipulated by altering the points as a fraction of the mean head radius. Face discrimination thresholds (% difference) were measured from a simultaneous four-alternative forced choice of the 'odd one out' from three identical faces and one that differed. Psychometric functions were measured for four participants with normal and degraded vision. Subsequently, the difference between faces was fixed at twice the discrimination threshold and the size of the faces was manipulated using the FrACT threshold procedure in 25 participants. Data were converted to equivalent face discrimination distances for realistic face dimensions. RESULTS With normal vision, face discrimination thresholds ranged from 2.7% to 5.6%; these increased systematically and were more variable with visual degradation. When manipulating face size, face discrimination distance was highly correlated with both acuity and contrast sensitivity (r² = 0.77 and 0.80 respectively, p < 0.01). The mean distance with normal vision was 15.3 m (14.5-16.2, ±S.E.M.). With vision degraded to 0.6 logMAR (6/24 Snellen, contrast threshold 15%) the mean face discrimination distance was reduced to 3.9 m (3.7-4.1, ±S.E.M.).
CONCLUSIONS Poor face discrimination has a profound impact on real-life social communication. Here we report that artificial visual degradation also adversely impacts a synthetic face recognition task. As a rule of thumb, a reduction in VA of 0.3 logMAR (halving the decimal VA) reduces the face recognition distance by a factor of 0.6. The FrACT-based face discrimination task provides an efficient new tool to quantify and monitor face discrimination ability.
Affiliation(s)
- Daphne L McCulloch
- Department of Vision Sciences, Glasgow Caledonian University, Glasgow, UK.
95
Gao X, Maurer D. A comparison of spatial frequency tuning for the recognition of facial identity and facial expressions in adults and children. Vision Res 2011; 51:508-19. [PMID: 21277319] [DOI: 10.1016/j.visres.2011.01.011]
Abstract
We measured contrast thresholds for the identification of faces and facial expressions as a function of the center spatial frequency of narrow-band additive noise. In adults, masking of mid spatial frequencies (11-16 cycles/face width) caused the largest elevation in contrast threshold (Experiment 1). Ideal observer analysis revealed that adults were equally sensitive to available information at low and mid spatial frequencies, both of which they used more efficiently than high spatial frequencies. The drop-off of sensitivity at high spatial frequencies began at a lower spatial frequency for recognizing facial identity than for recognizing facial expression. As a result, the critical band was higher for expression than for identity. The critical band for both identity and expression shifted to slightly lower values as distance increased (Experiment 2), a pattern indicating only partial scale invariance. Children aged 10 and 14 years showed similar tuning but needed more contrast (Experiment 3). The patterns suggest that adults use finer details for recognizing facial expressions than for identifying faces, a tuning that appears as early as age 10.
96
97
Peterson MF, Das K, Sy JL, Li S, Giesbrecht B, Kourtzi Z, Eckstein MP. Ideal observer analysis for task normalization of pattern classifier performance applied to EEG and fMRI data. J Opt Soc Am A 2010; 27:2670-2683. [PMID: 21119752] [DOI: 10.1364/josaa.27.002670]
Abstract
The application of multivariate techniques to neuroimaging and electrophysiological data has greatly enhanced the ability to detect where, when, and how functional neural information is processed during a variety of behavioral tasks. With the extension to single-trial analysis, neuroscientists are able to relate brain states to perceptual, cognitive, and motor processes. Using pattern classification methods, the neuroscientist can extract neural performance measures in a manner analogous to human behavioral performance, allowing for a consistent information content metric across measurement modalities. However, as with behavioral psychophysical performance, pattern classifier performances are a product of both the task-relevant information inherent in the brain and in the task/stimuli. Here, we argue for the use of an ideal observer framework with which the researcher can effectively normalize the observed neural performance given the task's inherent objective difficulty. We use data from a face versus car discrimination task and compare classifier performance applied to electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data with corresponding human behavior through the absolute and relative efficiency metrics. We show that confounding variables that can lead to erroneous interpretations of information content can be accounted for through comparisons to an ideal observer, allowing for more confident interpretation of the neural mechanisms involved in the task of interest. Finally, we discuss limitations of interpretation due to the transduction of indirect measures of neural activity, underlying assumptions in the optimality of the pattern classifiers, and dependence of efficiency results on signal contrast.
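The absolute efficiency metric used here to normalize observed performance against the ideal observer is, in its standard form, the squared ratio of sensitivities. A minimal standard-library sketch (function names are ours; the paper's pipeline is more involved):

```python
from statistics import NormalDist

def dprime(hit_rate, false_alarm_rate):
    """Sensitivity d' from hit and false-alarm rates (equal-variance Gaussian model)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def absolute_efficiency(d_observer, d_ideal):
    """Absolute efficiency: squared ratio of observer d' to ideal-observer d'."""
    return (d_observer / d_ideal) ** 2
```

Computing this quantity for a human observer and for a pattern classifier applied to EEG or fMRI data, on the same stimuli, puts both on the ideal-observer-normalized scale the abstract argues for; relative efficiency is then the ratio of the two absolute efficiencies.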
Affiliation(s)
- Matthew F Peterson
- Department of Psychology, University of California, Santa Barbara, California 93106, USA.
98
Wilson HR, Mei M, Habak C, Wilkinson F. Visual bandwidths for face orientation increase during healthy aging. Vision Res 2010; 51:160-4. [PMID: 21074549] [DOI: 10.1016/j.visres.2010.10.026]
Abstract
Perception of visual motion declines during healthy aging, and evidence suggests that this reflects decreases in cortical GABA inhibition that increase neural noise and motion bandwidths. This is supported by neurophysiological data on motion perception in senescent monkeys. Much less is known about deficits in higher level form vision. For example, face perception of frontal views remains relatively constant from adolescence through age 70 with a modest decline thereafter. However, we have shown recently that the elderly have a specific deficit in face matching when a transformation must be made between frontal and left or right side views. Here we use face view adaptation to demonstrate that this deficit results from significant broadening of cortical bandwidths for face orientation along with increased internal noise. A neural model shows that these bandwidths increase by a factor of 1.74 between age 26 and age 67 years. This is similar to the increase reported for motion bandwidths in senescent monkeys. Furthermore, the neural model demonstrates that head orientation bandwidth increases can arise from decreased cortical inhibition. Thus, high levels of form vision degrade in parallel with higher levels of motion perception and likely result from similar causes.
Affiliation(s)
- Hugh R Wilson
- Centre for Vision Research, York University, Toronto, Canada.
99
Therrien ME, Collin CA. Spatial vision meets spatial cognition: examining the effect of visual blur on human visually guided route learning. Perception 2010; 39:1043-64. [PMID: 20942357] [DOI: 10.1068/p5991]
Abstract
Visual navigation is a task that involves processing two-dimensional light patterns on the retinas to obtain knowledge of how to move through a three-dimensional environment. Therefore, modifying the basic characteristics of the two-dimensional information provided to navigators should have important and informative effects on how they navigate. Despite this, few basic research studies have examined the effects of systematically modifying the available levels of spatial visual detail on navigation performance. In this study, we tested the effects of a range of visual blur levels--approximately equivalent to various degrees of low-pass spatial frequency filtering--on participants' visually guided route-learning performance using desktop virtual renderings of the Hebb-Williams mazes. Our findings show that the function relating blur level to time to finish the mazes follows a sigmoidal pattern, with the inflection point around +2 D of experienced defocus. This suggests that visually guided route learning is fairly robust to blur, with the threshold level being just above the limit for legal blindness. These findings have implications for models of route learning, as well as for practical situations in which humans must navigate under conditions of blur.
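The sigmoidal blur/performance relation reported here can be illustrated with a logistic function. Every parameter value below except the roughly +2 D inflection point is invented for demonstration; this is a shape sketch, not the authors' fitted model.

```python
import math

def completion_time(blur_diopters, t_min=30.0, t_max=120.0, inflection=2.0, slope=2.5):
    """Logistic curve relating defocus blur (diopters) to maze completion time (s).

    Only the ~+2 D inflection point comes from the abstract; t_min, t_max,
    and slope are hypothetical values chosen for illustration.
    """
    return t_min + (t_max - t_min) / (1.0 + math.exp(-slope * (blur_diopters - inflection)))
```

Fitting such a curve to observed completion times and reading off the inflection point is one way to locate the blur threshold the study describes.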
Affiliation(s)
- Megan E Therrien
- School of Psychology, University of Ottawa, 125 University Private, Room MNT 415A, Ottawa, Canada
100
Betts LR, Wilson HR. Heterogeneous Structure in Face-selective Human Occipito-temporal Cortex. J Cogn Neurosci 2010; 22:2276-88. [DOI: 10.1162/jocn.2009.21346]
Abstract
It is well established that the human visual system contains a distributed network of regions that are involved in processing faces, but our understanding of how faces are represented within these face-sensitive brain areas is incomplete. We used fMRI to investigate whether face-sensitive brain areas are solely tuned for whole faces, or whether they contain heterogeneous populations of neurons tuned to individual components of the face as well as whole faces, as suggested by physiological investigations in nonhuman primates. The middle fusiform gyrus (fusiform face area, or FFA) and the inferior occipital gyrus (occipital face area, or OFA) produced robust BOLD activation to synthetic whole face stimuli, but also to the internal facial features and head outlines. BOLD responses to whole face stimuli in FFA were significantly reduced after adaptation to whole faces, but not after adaptation to features or head outlines, whereas activation to head outlines was reduced after adaptation to both whole faces and head outlines. OFA showed no significant adaptation effects for matching adaptation and test conditions, but did exhibit cross-adaptation between whole faces and head outlines. The internal face features did not produce any significant adaptation within either FFA or OFA. Our results are consistent with a model in which independent populations of whole face-, feature-, and head outline-tuned neurons exist within face-sensitive regions of human occipito-temporal cortex, which in turn would support tasks such as viewpoint processing, emotion classification, and identity discrimination.