1
Gomes N, Benrós MF, Semin GR. Validation of the open biological negative image set for a Portuguese population: Comparing Japanese and Portuguese samples and an exploration of low-order visual properties of the stimuli. Behav Res Methods 2024; 56:860-880. PMID: 36882667; PMCID: PMC10830772; DOI: 10.3758/s13428-023-02090-9.
Abstract
Recently, Shirai and Watanabe (Royal Society Open Science, 9(1), 211128, 2022) developed OBNIS (Open Biological Negative Image Set), a comprehensive database containing images (primarily animals, but also fruits, mushrooms, and vegetables) that visually elicit disgust, fear, or neither. OBNIS was initially validated for a Japanese population. In this article, we validated the color version of OBNIS for a Portuguese population. In Study 1, the methodology of the original article was used, allowing direct comparisons between the Portuguese and Japanese populations. Aside from a few classification mismatches among disgust-, fear-, and neither-related images, we found that arousal and valence relate differently in the two populations. In contrast to the Japanese sample, the Portuguese reported increased arousal for more positively valenced stimuli, suggesting that OBNIS images elicit positive emotions in the Portuguese population. These results reveal important cross-cultural differences regarding OBNIS. In Study 2, a methodological change was introduced: instead of the three classification options used originally (fear, disgust, or neither), six basic emotions (fear, disgust, sadness, surprise, anger, happiness) plus a "neither" option were offered, to confirm whether some of the originally "neither-related" images are associated with positive emotions (happiness). Additionally, the low-order visual properties of the images (luminosity, contrast, chromatic complexity, and spatial frequency distribution) were explored because of their important role in emotion-related research. A fourth image group, associated with happiness, was found in the Portuguese sample. Moreover, the image groups differ in low-order visual characteristics that correlate with arousal and valence ratings, highlighting the importance of controlling such characteristics in emotion-related research.
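The achromatic low-order properties examined here (luminosity, contrast, spatial frequency distribution) can be approximated for a grayscale image with a short script. This is a minimal sketch, not the authors' pipeline; the function name and the amplitude-weighted mean-SF summary statistic are illustrative choices.

```python
import numpy as np

def low_order_properties(img):
    """Summarize low-order visual properties of a grayscale image.

    img: 2-D float array with values in [0, 1].
    Returns (mean luminance, RMS contrast, amplitude-weighted mean
    spatial frequency in cycles/image).
    """
    img = np.asarray(img, dtype=float)
    luminance = img.mean()
    rms_contrast = img.std()

    # Amplitude spectrum of the mean-removed image.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img - luminance)))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h  # cycles/image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w  # cycles/image, horizontal
    radius = np.hypot(fy[:, None], fx[None, :])  # radial SF of each bin
    mean_sf = (radius * amp).sum() / amp.sum()
    return luminance, rms_contrast, mean_sf
```

Chromatic complexity would need a separate treatment of the color channels; the sketch above covers only the achromatic properties.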
Affiliation(s)
- Nuno Gomes
- William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco 34, 1149-041, Lisbon, Portugal
- Miguel F Benrós
- William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco 34, 1149-041, Lisbon, Portugal
- Gün R Semin
- William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco 34, 1149-041, Lisbon, Portugal
- Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, The Netherlands
2
Lieber JD, Lee GM, Majaj NJ, Movshon JA. Sensitivity to naturalistic texture relies primarily on high spatial frequencies. J Vis 2023; 23:4. PMID: 36745452; PMCID: PMC9910384; DOI: 10.1167/jov.23.2.4.
Abstract
Natural images contain information at multiple spatial scales. Though we understand how early visual mechanisms split multiscale images into distinct spatial frequency channels, we do not know how the outputs of these channels are processed further by mid-level visual mechanisms. We have recently developed a texture discrimination task that uses synthetic, multiscale, "naturalistic" textures to isolate these mid-level mechanisms. Here, we use three experimental manipulations (image blur, image rescaling, and eccentric viewing) to show that perceptual sensitivity to naturalistic structure is strongly dependent on features at high object spatial frequencies (measured in cycles/image). As a result, sensitivity depends on a texture acuity limit, a property of the visual system that sets the highest retinal spatial frequency (measured in cycles/degree) at which observers can detect naturalistic features. A model observer analysis of the texture images shows that naturalistic image features at high object spatial frequencies carry more task-relevant information than those at low object spatial frequencies. That is, the dependence of sensitivity on high object spatial frequencies is a property of the texture images, rather than a property of the visual system. Accordingly, we find that human observers' ability to extract naturalistic information (their efficiency) is similar for all object spatial frequencies. We conclude that the mid-level mechanisms that underlie perceptual sensitivity effectively extract information from all image features below the texture acuity limit, regardless of their retinal and object spatial frequency.
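The distinction this abstract draws between object spatial frequency (cycles/image) and retinal spatial frequency (cycles/degree) can be illustrated with a toy example: rescaling an image changes its retinal size, but leaves the object SF of its content unchanged. A minimal sketch, with illustrative function names not taken from the paper:

```python
import numpy as np

def downscale2(img):
    """Rescale to half size by 2x2 block averaging (a crude resampling)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def peak_object_sf(img):
    """Dominant horizontal object SF, in cycles/image (DC excluded)."""
    profile = img.mean(axis=0) - img.mean()  # average over rows, remove mean
    spectrum = np.abs(np.fft.rfft(profile))  # bin k = k cycles across image
    return int(np.argmax(spectrum[1:])) + 1
```

A grating with 8 cycles across the image still has 8 cycles across the image after downscaling to half size, even though, at a fixed viewing distance, its retinal frequency in cycles/degree has doubled.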
Affiliation(s)
- Justin D Lieber
- Center for Neural Science, New York University, New York, NY, USA
- Gerick M Lee
- Center for Neural Science, New York University, New York, NY, USA
- Najib J Majaj
- Center for Neural Science, New York University, New York, NY, USA
3
Features and Extra-Striate Body Area Representations of Diagnostic Body Parts in Anger and Fear Perception. Brain Sci 2022; 12:466. PMID: 35447997; PMCID: PMC9028525; DOI: 10.3390/brainsci12040466.
Abstract
Social species perceive emotion by extracting diagnostic features of body movements. Although extensive studies have contributed to knowledge of how the entire body is used as context for decoding bodily expression, we know little about whether specific body parts (e.g., arms and legs) transmit enough information for body understanding. In this study, we performed behavioral experiments using the Bubbles paradigm on static body images to directly explore the diagnostic body parts for categorizing angry, fearful, and neutral expressions. Results showed that subjects recognized emotional bodies through diagnostic features from the torso with arms. We then conducted a follow-up functional magnetic resonance imaging (fMRI) experiment on body-part images to examine whether diagnostic parts modulated body-related brain activity and the corresponding neural representations. We found greater activation of the extra-striate body area (EBA) in response to both anger and fear than to neutral expressions for the torso and arms. Representational similarity analysis showed that neural patterns of the EBA distinguished different bodily expressions. Furthermore, the torso with arms and the whole body had higher similarities in EBA representations than the legs and whole body, or the head and whole body. Taken together, these results indicate that diagnostic body parts (i.e., torso with arms) can communicate bodily expression in a detectable manner.
4
Charbonneau I, Guérette J, Cormier S, Blais C, Lalonde-Beaudoin G, Smith FW, Fiset D. The role of spatial frequencies for facial pain categorization. Sci Rep 2021; 11:14357. PMID: 34257357; PMCID: PMC8277883; DOI: 10.1038/s41598-021-93776-7.
Abstract
Studies on the low-level visual information underlying pain categorization have led to inconsistent findings: some show an advantage for low spatial frequencies (SFs), and others a preponderance of mid SFs. This study aims to close this gap in knowledge, since these results have different theoretical and practical implications, such as how far away an observer can be and still categorize pain. We address this question using two complementary methods: a data-driven method without a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the distance of stimulus presentation. We reveal a broad range of SFs important for pain recognition, from low to relatively high, and show that performance is optimal at short to medium distances (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results showing an advantage of low SFs over high SFs when arbitrary cutoffs are used, but above all reveals the prominent role of mid SFs for pain recognition across two complementary experimental tasks.
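The link between viewing distance and the availability of mid SFs follows from simple geometry: moving away shrinks the image's visual angle, which pushes every object SF (cycles/image) to a higher retinal SF (cycles/degree) until it crosses the observer's acuity limit. A hedged sketch of the conversion; the function names and the example stimulus size are illustrative, not taken from the study:

```python
import math

def image_visual_angle_deg(image_size_m, distance_m):
    """Visual angle (degrees) subtended by an image of a given physical size."""
    return math.degrees(2 * math.atan(image_size_m / (2 * distance_m)))

def retinal_sf(object_sf_cpi, image_size_m, distance_m):
    """Convert object SF (cycles/image) to retinal SF (cycles/degree)."""
    return object_sf_cpi / image_visual_angle_deg(image_size_m, distance_m)
```

For a hypothetical 20-cm-wide face image, moving from 1.2 m to 4.8 m roughly quadruples the retinal SF of every image component, so informative mid object SFs are pushed toward the acuity limit first.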
Affiliation(s)
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Joël Guérette
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Guillaume Lalonde-Beaudoin
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, NR4 7TJ, UK
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
5
Flexible time course of spatial frequency use during scene categorization. Sci Rep 2021; 11:14079. PMID: 34234183; PMCID: PMC8263560; DOI: 10.1038/s41598-021-93252-2.
Abstract
Human observers can quickly and accurately categorize scenes. This remarkable ability is related to the usage of information at different spatial frequencies (SFs) following a coarse-to-fine pattern: Low SFs, conveying coarse layout information, are thought to be used earlier than high SFs, representing more fine-grained information. Alternatives to this pattern have rarely been considered. Here, we probed all possible SF usage strategies randomly with high resolution in both the SF and time dimensions at two categorization levels. We show that correct basic-level categorizations of indoor scenes are linked to the sampling of relatively high SFs, whereas correct outdoor scene categorizations are predicted by an early use of high SFs and a later use of low SFs (fine-to-coarse pattern of SF usage). Superordinate-level categorizations (indoor vs. outdoor scenes) rely on lower SFs early on, followed by a shift to higher SFs and a subsequent shift back to lower SFs in late stages. In summary, our results show no consistent pattern of SF usage across tasks and only partially replicate the diagnostic SFs found in previous studies. We therefore propose that SF sampling strategies of observers differ with varying stimulus and task characteristics, thus favouring the notion of flexible SF usage.
6
Nador JD, Zoia M, Pachai MV, Ramon M. Psychophysical profiles in super-recognizers. Sci Rep 2021; 11:13184. PMID: 34162959; PMCID: PMC8222339; DOI: 10.1038/s41598-021-92549-6.
Abstract
Facial identity matching ability varies widely, ranging from prosopagnosic individuals (who exhibit profound impairments in face cognition/processing) to so-called super-recognizers (SRs), who possess exceptional capacities. Yet, despite the often consequential nature of face matching decisions, such as identity verification in security-critical settings, ability assessments tend to rely on simple performance metrics for a handful of heterogeneously related subprocesses, or in some cases only a single measured subprocess. Unfortunately, such methodologies leave the contribution of stimulus information to observed variations in ability largely un(der)specified. Moreover, they are inadequate for addressing the qualitative or quantitative nature of differences between SRs' abilities and those of the general population. Here, therefore, we sought to investigate individual differences among SRs, identified using a novel conservative diagnostic framework, and neurotypical controls, by systematically varying the retinal availability, bandwidth, and orientation of faces' spatial frequency content in two face matching experiments. Psychophysical evaluations of these parameters' contributions to ability reveal that SRs more consistently exploit the same spatial frequency information, rather than exhibiting qualitatively different profiles from control observers. These findings stress the importance of optimizing procedures for SR identification, for example by including measures that quantify the consistency of individuals' behavior.
Affiliation(s)
- Jeffrey D Nador
- Department of Psychology, Applied Face Cognition Lab, University of Fribourg, Rue P.-A. de Faucigny 2, 1700, Fribourg, Switzerland
- Matteo Zoia
- Department of Psychology, Applied Face Cognition Lab, University of Fribourg, Rue P.-A. de Faucigny 2, 1700, Fribourg, Switzerland
- Matthew V Pachai
- Perceptual Neuroscience Laboratory, York University, Toronto, Canada
- Meike Ramon
- Department of Psychology, Applied Face Cognition Lab, University of Fribourg, Rue P.-A. de Faucigny 2, 1700, Fribourg, Switzerland
7
Krumhuber EG, Hyniewska S, Orlowska A. Contextual effects on smile perception and recognition memory. Current Psychology 2021. DOI: 10.1007/s12144-021-01910-5.
Abstract
Most past research has focused on the role played by social context information in emotion classification, such as whether a display is perceived as belonging to one emotion category or another. The current study aims to investigate whether the effect of context extends to the interpretation of emotion displays, i.e. smiles that could be judged either as posed or as spontaneous readouts of underlying positive emotion. A between-subjects design (N = 93) was used to investigate the perception and recall of posed smiles, presented together with a happy or polite social context scenario. Results showed that smiles seen in a happy context were judged as more spontaneous than the same smiles presented in a polite context. Also, smiles were misremembered as having more of the physical attributes (i.e., Duchenne marker) associated with spontaneous enjoyment when they appeared in the happy than in the polite context condition. Together, these findings indicate that social context information is routinely encoded during emotion perception, thereby shaping the interpretation and recognition memory of facial expressions.
8
Blais C, Linnell KJ, Caparos S, Estéphan A. Cultural Differences in Face Recognition and Potential Underlying Mechanisms. Front Psychol 2021; 12:627026. PMID: 33927668; PMCID: PMC8076495; DOI: 10.3389/fpsyg.2021.627026.
Abstract
The ability to recognize a face is crucial for the success of social interactions. Understanding the visual processes underlying this ability has been the focus of a long tradition of research. Recent advances in the field have revealed that individuals having different cultural backgrounds differ in the type of visual information they use for face processing. However, the mechanisms that underpin these differences remain unknown. Here, we revisit recent findings highlighting group differences in face processing. Then, we integrate these results in a model of visual categorization developed in the field of psychophysics: the RAP framework. On the basis of this framework, we discuss potential mechanisms, whether face-specific or not, that may underlie cross-cultural differences in face perception.
Affiliation(s)
- Caroline Blais
- Groupe de Neurosciences Sociales, Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Karina J Linnell
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Serge Caparos
- Laboratoire DysCo, Université Paris 8, Saint-Denis, France
- Institut Universitaire de France, Paris, France
- Amanda Estéphan
- Groupe de Neurosciences Sociales, Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, Canada
- Département de Psychologie, Université du Québec à Montréal, Montréal, QC, Canada
9
Empirical Insights from a Study on Outlier Preserving Value Generalization in Animated Choropleth Maps. ISPRS International Journal of Geo-Information 2021. DOI: 10.3390/ijgi10040208.
Abstract
Time series animation of choropleth maps easily exceeds our perceptual limits. In this empirical research, we investigate the effect of local outlier preserving value generalization of animated choropleth maps on the ability to detect general trends and local deviations thereof. Comparing generalization in space, in time, and in a combination of both dimensions, value smoothing based on a first order spatial neighborhood facilitated the detection of local outliers best, followed by the spatiotemporal and temporal generalization variants. We did not find any evidence that value generalization helps in detecting global trends.
10
Wang S, Eccleston C, Keogh E. The Time Course of Facial Expression Recognition Using Spatial Frequency Information: Comparing Pain and Core Emotions. The Journal of Pain 2020; 22:196-208. PMID: 32771561; DOI: 10.1016/j.jpain.2020.07.004.
Abstract
We are able to recognize others' experience of pain from their facial expressions. However, little is known about what makes the recognition of pain possible and whether it is similar to or different from core emotions. This study investigated the mechanisms underpinning the recognition of pain expressions, in terms of spatial frequency (SF) information analysis, and compared pain with 2 core emotions (ie, fear and happiness). Two experiments using a backward masking paradigm were conducted to examine the time course of low- and high-SF information processing, by manipulating the presentation duration of face stimuli and target-mask onset asynchrony. Overall, we found a temporal advantage of low-SF over high-SF information for expression recognition, including pain. This asynchrony between low- and high-SF processing arose at a very early stage of information extraction, which indicates that the decoding of low-SF expression information is not only faster but possibly occurs before the processing of high-SF information. Interestingly, the recognition of pain was also found to be slower and more difficult than that of core emotions. A more complex decoding process may be involved in the successful recognition of pain from facial expressions, possibly due to the multidimensional nature of pain experiences. PERSPECTIVE: Two studies explore the perceptual and temporal properties of the decoding of pain facial expressions. At very early stages of attention, the recognition of pain was found to be more difficult than that of fear and happiness. This suggests that pain is a complex expression that requires additional time to detect and process.
Affiliation(s)
- Shan Wang
- Centre for Pain Research, University of Bath, Bath, United Kingdom; Department of Psychology, University of Bath, Bath, United Kingdom; Division of Social Sciences, Duke Kunshan University, Kunshan, Jiangsu Province, China
- Christopher Eccleston
- Centre for Pain Research, University of Bath, Bath, United Kingdom; Department of Experimental-Clinical and Health Psychology, Ghent University, Belgium
- Edmund Keogh
- Centre for Pain Research, University of Bath, Bath, United Kingdom; Department of Psychology, University of Bath, Bath, United Kingdom
11
Zhang Q, Li S. The roles of spatial frequency in category-level visual search of real-world scenes. Psych J 2019; 9:44-55. DOI: 10.1002/pchj.294.
Affiliation(s)
- Qi Zhang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
- Sheng Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
12
Zhan J, Ince RAA, van Rijsbergen N, Schyns PG. Dynamic Construction of Reduced Representations in the Brain for Perceptual Decision Behavior. Curr Biol 2019; 29:319-326.e4. PMID: 30639108; PMCID: PMC6345582; DOI: 10.1016/j.cub.2018.11.049.
Abstract
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations, e.g., categorizing the same object as "a car" or "a Porsche." While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how they happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway into the low-dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magnetoencephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.
Affiliation(s)
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Nicola van Rijsbergen
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom; School of Psychology, University of Glasgow, 62 Hillhead Street, Glasgow, Scotland G12 8QB, United Kingdom
13
Schafer A, Rouland JF, Peyrin C, Szaffarczyk S, Boucart M. Glaucoma Affects Viewing Distance for Recognition of Sex and Facial Expression. Invest Ophthalmol Vis Sci 2018; 59:4921-4928. DOI: 10.1167/iovs.18-24875.
Affiliation(s)
- Audrey Schafer
- Centre Hospitalier Universitaire de Lille, Hôpital Huriez, Service d'Ophtalmologie, Lille, France
- Jean François Rouland
- Centre Hospitalier Universitaire de Lille, Hôpital Huriez, Service d'Ophtalmologie, Lille, France
- SCALab, University of Lille, Centre National de la Recherche Scientifique, Lille, France
- Carole Peyrin
- University Grenoble Alpes, CNRS, LPNC, 38000 Grenoble, France
- Sebastien Szaffarczyk
- SCALab, University of Lille, Centre National de la Recherche Scientifique, Lille, France
- Muriel Boucart
- SCALab, University of Lille, Centre National de la Recherche Scientifique, Lille, France
14
Jeantet C, Caharel S, Schwan R, Lighezzolo-Alnot J, Laprevote V. Factors influencing spatial frequency extraction in faces: A review. Neurosci Biobehav Rev 2018. DOI: 10.1016/j.neubiorev.2018.03.006.
15
The role of spatial frequency information in the decoding of facial expressions of pain: a novel hybrid task. Pain 2018; 158:2233-2242. PMID: 28767508; DOI: 10.1097/j.pain.0000000000001031.
Abstract
Spatial frequency (SF) information contributes to the recognition of facial expressions, including pain. Low SFs encode facial configuration and structure and often dominate over high-SF information, which encodes fine details in facial features. This low-SF preference has not been investigated within the context of pain. In this study, we investigated whether perceptual preference differences exist for low-SF and high-SF pain information. A novel hybrid expression paradigm was used in which 2 different expressions, one containing low-SF information and the other high-SF information, were combined in a facial hybrid. Participants were instructed to identify the core expression contained within the hybrid, allowing for the measurement of SF information preference. Three experiments were conducted (46 participants in each) that varied the expressions within the hybrid faces: respectively pain-neutral, pain-fear, and pain-happiness. In order to measure the temporal aspects of image processing, each hybrid image was presented for 33, 67, 150, and 300 ms. As expected, identification of pain and other expressions was dominated by low-SF information across the 3 experiments. The low-SF preference was largest when the presentation of hybrid faces was brief and decreased as the presentation duration increased. A sex difference was also found in experiment 1: for women, the low-SF preference was dampened by high-SF pain information when viewing low-SF neutral expressions. These results not only confirm the role that SF information plays in the recognition of pain in facial expressions but suggest that, in some situations, there may be sex differences in how pain is communicated.
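A hybrid expression stimulus of the kind described here can be built by summing the low-pass residue of one face with the high-pass residue of another in the Fourier domain. The following is a minimal sketch, assuming grayscale arrays of equal size; the Gaussian envelope and the default cutoff are illustrative choices, not the study's exact filters.

```python
import numpy as np

def gaussian_lowpass(img, cutoff):
    """Low-pass filter in the Fourier domain with a Gaussian envelope.

    cutoff: standard deviation of the envelope, in cycles/image.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h  # cycles/image, vertical
    fx = np.fft.fftfreq(w) * w  # cycles/image, horizontal
    radius2 = fy[:, None] ** 2 + fx[None, :] ** 2
    envelope = np.exp(-radius2 / (2 * cutoff ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * envelope))

def hybrid(face_low, face_high, cutoff=8.0):
    """Combine the low SFs of one image with the high SFs of another."""
    low = gaussian_lowpass(face_low, cutoff)
    high = face_high - gaussian_lowpass(face_high, cutoff)
    return low + high
```

Because the envelope equals 1 at zero frequency, the low-pass branch carries all of the mean luminance, so the hybrid inherits the overall brightness of `face_low`.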
16
Smith FW, Rossit S. Identifying and detecting facial expressions of emotion in peripheral vision. PLoS One 2018; 13:e0197160. PMID: 29847562; PMCID: PMC5976168; DOI: 10.1371/journal.pone.0197160.
Abstract
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprised being the best recognized expressions in peripheral vision. In detection however, while happiness and surprised are still well detected, fear is also a well detected expression. We show that fear is a better detected than recognized expression. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.
Affiliation(s)
- Fraser W. Smith
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Stephanie Rossit
- School of Psychology, University of East Anglia, Norwich, United Kingdom
17
Tian J, Wang J, Xia T, Zhao W, Xu Q, He W. The influence of spatial frequency content on facial expression processing: An ERP study using rapid serial visual presentation. Sci Rep 2018; 8:2383. PMID: 29403062; PMCID: PMC5799249; DOI: 10.1038/s41598-018-20467-1.
Abstract
Spatial frequency (SF) contents have been shown to play an important role in emotion perception. This study employed event-related potentials (ERPs) to explore the time course of neural dynamics involved in the processing of facial expression conveying specific SF information. Participants completed a dual-target rapid serial visual presentation (RSVP) task, in which SF-filtered happy, fearful, and neutral faces were presented. The face-sensitive N170 component distinguished emotional (happy and fearful) faces from neutral faces in a low spatial frequency (LSF) condition, while only happy faces were distinguished from neutral faces in a high spatial frequency (HSF) condition. The later P3 component differentiated between the three types of emotional faces in both LSF and HSF conditions. Furthermore, LSF information elicited larger P1 amplitudes than did HSF information, while HSF information elicited larger N170 and P3 amplitudes than did LSF information. Taken together, these results suggest that emotion perception is selectively tuned to distinctive SF contents at different temporal processing stages.
Affiliation(s)
- Jinhua Tian, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Jian Wang, School of Public Policy and Management, Anhui Jianzhu University, Hefei, 230061, China
- Tao Xia, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Wenshuang Zhao, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Qianru Xu, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Weiqi He, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
18. Estéphan A, Fiset D, Saumure C, Plouffe-Demers MP, Zhang Y, Sun D, Blais C. Time Course of Cultural Differences in Spatial Frequency Use for Face Identification. Sci Rep 2018; 8:1816. [PMID: 29379032 PMCID: PMC5788938 DOI: 10.1038/s41598-018-19971-1]
Abstract
Several previous studies of eye movements have put forward that, during face recognition, Easterners spread their attention across a greater part of their visual field than Westerners. Recently, we found that culture’s effect on the perception of faces reaches mechanisms deeper than eye movements, therefore affecting the very nature of information sampled by the visual system: that is, Westerners globally rely more than Easterners on fine-grained visual information (i.e. high spatial frequencies; SFs), whereas Easterners rely more on coarse-grained visual information (i.e. low SFs). These findings suggest that culture influences basic visual processes; however, the temporal onset and dynamics of these culture-specific perceptual differences are still unknown. Here, we investigate the time course of SF use in Western Caucasian (Canadian) and East Asian (Chinese) observers during a face identification task. Firstly, our results confirm that Easterners use relatively lower SFs than Westerners, while the latter use relatively higher SFs. More importantly, our results indicate that these differences arise as early as 34 ms after stimulus onset, and remain stable through time. Our research supports the hypothesis that Westerners and Easterners initially rely on different types of visual information during face processing.
Affiliation(s)
- Amanda Estéphan, Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada; Département de psychologie, Université du Québec à Montréal, Québec, Canada
- Daniel Fiset, Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
- Camille Saumure, Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
- Ye Zhang, Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Dan Sun, Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Caroline Blais, Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
19. Revina Y, Petro LS, Muckli L. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs. Neuroimage 2017; 180:280-290. [PMID: 28951158 DOI: 10.1016/j.neuroimage.2017.09.047]
Abstract
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs.
Affiliation(s)
- Yulia Revina, Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK
- Lucy S Petro, Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK
- Lars Muckli, Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK
20. Kauffmann L, Roux-Sibilon A, Beffara B, Mermillod M, Guyader N, Peyrin C. How does information from low and high spatial frequencies interact during scene categorization? Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1347590]
Affiliation(s)
- Louise Kauffmann, Department of Psychology, University of Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France; Neural Mechanisms of Human Communication Research Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alexia Roux-Sibilon, Department of Psychology, University of Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Brice Beffara, Department of Psychology, University of Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Martial Mermillod, Department of Psychology, University of Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
- Nathalie Guyader, Image and Signal Department, University of Grenoble Alpes, GIPSA-lab UMR5216, Grenoble, France
- Carole Peyrin, Department of Psychology, University of Grenoble Alpes, CNRS, LPNC UMR 5105, Grenoble, France
21. A Rapid Subcortical Amygdala Route for Faces Irrespective of Spatial Frequency and Emotion. J Neurosci 2017; 37:3864-3874. [PMID: 28283563 DOI: 10.1523/jneurosci.3525-16.2017]
Abstract
There is significant controversy over the existence and function of a direct subcortical visual pathway to the amygdala. It is thought that this pathway rapidly transmits low spatial frequency information to the amygdala independently of the cortex, and yet the directionality of this function has never been determined. We used magnetoencephalography to measure neural activity while human participants discriminated the gender of neutral and fearful faces filtered for low or high spatial frequencies. We applied dynamic causal modeling to demonstrate that the most likely underlying neural network consisted of a pulvinar-amygdala connection that was uninfluenced by spatial frequency or emotion, and a cortical-amygdala connection that conveyed high spatial frequencies. Crucially, data-driven neural simulations revealed a clear temporal advantage of the subcortical connection over the cortical connection in influencing amygdala activity. Thus, our findings support the existence of a rapid subcortical pathway that is nonselective in terms of the spatial frequency or emotional content of faces. We propose that the "coarseness" of the subcortical route may be better reframed as "generalized."
SIGNIFICANCE STATEMENT: The human amygdala coordinates how we respond to biologically relevant stimuli, such as threat or reward. It has been postulated that the amygdala first receives visual input via a rapid subcortical route that conveys "coarse" information, namely, low spatial frequencies. For the first time, the present paper provides direction-specific evidence from computational modeling that the subcortical route plays a generalized role in visual processing by rapidly transmitting raw, unfiltered information directly to the amygdala. This calls into question a widely held assumption across human and animal research that fear responses are produced faster by low spatial frequencies. Our proposed mechanism suggests that organisms quickly generate fear responses to a wide range of visual properties, with clear implications for future research on anxiety-prevention strategies.
22. Wang X, Wang S, Fan Y, Huang D, Zhang Y. Speech-specific categorical perception deficit in autism: An Event-Related Potential study of lexical tone processing in Mandarin-speaking children. Sci Rep 2017; 7:43254. [PMID: 28225070 PMCID: PMC5320551 DOI: 10.1038/srep43254]
Abstract
Recent studies reveal that tonal language speakers with autism have enhanced neural sensitivity to pitch changes in nonspeech stimuli but not to lexical tone contrasts in their native language. The present ERP study investigated whether the distinct pitch processing pattern for speech and nonspeech stimuli in autism was due to a speech-specific deficit in categorical perception of lexical tones. A passive oddball paradigm was adopted to examine two groups (16 in the autism group and 15 in the control group) of Chinese children’s Mismatch Responses (MMRs) to equivalent pitch deviations representing within-category and between-category differences in speech and nonspeech contexts. To further examine group-level differences in the MMRs to categorical perception of speech/nonspeech stimuli or lack thereof, neural oscillatory activities at the single trial level were further calculated with the inter-trial phase coherence (ITPC) measure for the theta and beta frequency bands. The MMR and ITPC data from the children with autism showed evidence for lack of categorical perception in the lexical tone condition. In view of the important role of lexical tones in acquiring a tonal language, the results point to the necessity of early intervention for the individuals with autism who show such a speech-specific categorical perception deficit.
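The inter-trial phase coherence (ITPC) measure used in this study has a compact definition: take each trial's spectral estimate at a frequency of interest, normalize it to a unit phase vector, and compute the magnitude of the across-trial mean. A minimal sketch on simulated data (the sampling rate, trial count, and 6 Hz theta frequency here are invented for the demo, not taken from the study):

```python
import numpy as np

def itpc(trials, sfreq, freq):
    """Inter-trial phase coherence at one frequency: the magnitude of the
    across-trial mean of unit-length phase vectors. 1 = perfectly
    phase-locked across trials, ~0 = random phase from trial to trial."""
    n = trials.shape[1]
    bin_idx = int(round(freq * n / sfreq))        # nearest FFT bin
    spectrum = np.fft.rfft(trials, axis=1)[:, bin_idx]
    phases = spectrum / np.abs(spectrum)          # unit phase vectors
    return float(np.abs(phases.mean()))

rng = np.random.default_rng(1)
sfreq = 250
t = np.arange(250) / sfreq
# 50 trials of phase-locked 6 Hz (theta-band) activity in noise...
locked = np.stack([np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
                   for _ in range(50)])
# ...versus 50 trials whose 6 Hz phase is random on every trial.
jittered = np.stack([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)])
coh_locked = itpc(locked, sfreq, 6)      # close to 1
coh_jittered = itpc(jittered, sfreq, 6)  # much smaller
```

A reduced ITPC, as reported for the autism group's lexical-tone condition, corresponds to the second case: single-trial responses are present but not consistently phase-aligned to the stimulus.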
Affiliation(s)
- Xiaoyue Wang, School of Psychology, South China Normal University, Guangzhou, 510631, China
- Suiping Wang, School of Psychology, South China Normal University, Guangzhou, 510631, China; Center for Studies of Psychological Application, South China Normal University, 510631, China; Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Yuebo Fan, Guangzhou Rehabilitation and Research Center for Children with Autism, Guangzhou Cana School, Guangzhou, 510540, China
- Dan Huang, Guangzhou Rehabilitation and Research Center for Children with Autism, Guangzhou Cana School, Guangzhou, 510540, China
- Yang Zhang, Department of Speech-Language-Hearing Science, University of Minnesota, Minneapolis, MN, 55455, USA; Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, 55455, USA
23. Craddock M, Oppermann F, Müller MM, Martinovic J. Modulation of microsaccades by spatial frequency during object categorization. Vision Res 2016; 130:48-56. [PMID: 27876511 DOI: 10.1016/j.visres.2016.10.011]
Abstract
The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions.
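The analysis described above reduces to splitting saccades at an amplitude threshold and counting each class per condition. A toy sketch of that bookkeeping (the 0.4 deg split comes from the abstract; the condition labels and the data values are invented for the demo):

```python
def saccade_counts(amplitudes_deg, conditions, threshold=0.4):
    """Split saccades at the amplitude threshold (0.4 deg, mirroring the
    bimodal split reported in the abstract) and count small vs. large
    saccades per condition. Condition names here are hypothetical."""
    small, large = {}, {}
    for amp, cond in zip(amplitudes_deg, conditions):
        bucket = large if amp >= threshold else small
        bucket[cond] = bucket.get(cond, 0) + 1
    return small, large

# Invented demo data: saccade amplitudes in degrees with their condition.
amps = [0.1, 0.55, 0.7, 0.2, 0.45, 0.3]
conds = ["HSF", "HSF", "BB", "LSF", "BB", "LSF"]
small, large = saccade_counts(amps, conds)
# small -> {'HSF': 1, 'LSF': 2}; large -> {'HSF': 1, 'BB': 2}
```

Dividing each count by trial duration would turn these counts into the per-condition rates compared in the study.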
Affiliation(s)
- Matt Craddock, Institute of Psychology, University of Leipzig, Germany; School of Psychology, University of Leeds, UK
- Frank Oppermann, Institute of Psychology, University of Leipzig, Germany; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands
24. Combining S-cone and luminance signals adversely affects discrimination of objects within backgrounds. Sci Rep 2016; 6:20504. [PMID: 26856308 PMCID: PMC4746639 DOI: 10.1038/srep20504]
Abstract
The visual system processes objects embedded in complex scenes that vary in both luminance and colour. In such scenes, colour contributes to the segmentation of objects from backgrounds, but does it also affect perceptual organisation of object contours which are already defined by luminance signals, or are these processes unaffected by colour's presence? We investigated if luminance and chromatic signals comparably sustain processing of objects embedded in backgrounds, by varying contrast along the luminance dimension and along the two cone-opponent colour directions. In the first experiment thresholds for object/non-object discrimination of Gaborised shapes were obtained in the presence and absence of background clutter. Contrast of the component Gabors was modulated along single colour/luminance dimensions or co-modulated along multiple dimensions simultaneously. Background clutter elevated discrimination thresholds only for combined S-(L + M) and L + M signals. The second experiment replicated and extended this finding by demonstrating that the effect was dependent on the presence of relatively high S-(L + M) contrast. These results indicate that S-(L + M) signals impair spatial vision when combined with luminance. Since S-(L + M) signals are characterised by relatively large receptive fields, this is likely to be due to an increase in the size of the integration field over which contour-defining information is summed.
25. Pascalis O, Kelly DJ. The Origins of Face Processing in Humans: Phylogeny and Ontogeny. Perspect Psychol Sci 2015; 4:200-9. [PMID: 26158945 DOI: 10.1111/j.1745-6924.2009.01119.x]
Abstract
Faces are crucial for nonverbal communication in humans and related species. From the first moments of life, newborn infants prefer to look at human faces over almost any other form of stimuli. Since this finding was first observed, there has been much debate regarding the "special" nature of face processing. Researchers have put forward numerous developmental models that attempt to account for this early preference and subsequent maturation of the face processing system. In this article, we review these models and their supporting evidence drawing on literature from developmental, evolutionary, and comparative psychology. We conclude that converging data from these fields strongly suggests that face processing is conducted by a dedicated and complex neural system, is not human specific, and is unlikely to have emerged recently in evolutionary history.
26. Ramon M. Perception of global facial geometry is modulated through experience. PeerJ 2015; 3:e850. [PMID: 25825678 PMCID: PMC4375970 DOI: 10.7717/peerj.850]
Abstract
Identification of personally familiar faces is highly efficient across various viewing conditions. While the presence of robust facial representations stored in memory is considered to aid this process, the mechanisms underlying invariant identification remain unclear. Two experiments tested the hypothesis that facial representations stored in memory are associated with differential perceptual processing of the overall facial geometry. Subjects who were personally familiar or unfamiliar with the identities presented discriminated between stimuli whose overall facial geometry had been manipulated to maintain or alter the original facial configuration (see Barton, Zhao & Keenan, 2003). The results demonstrate that familiarity gives rise to more efficient processing of global facial geometry, and are interpreted in terms of increased holistic processing of facial information that is maintained across viewing distances.
Affiliation(s)
- Meike Ramon, Institute of Research in Psychology, Institute of Neuroscience, Université catholique de Louvain, Louvain-La-Neuve, Belgium
27. Socially anxious individuals discriminate better between angry and neutral faces, particularly when using low spatial frequency information. J Behav Ther Exp Psychiatry 2015; 46:44-9. [PMID: 25208930 DOI: 10.1016/j.jbtep.2014.06.008]
Abstract
BACKGROUND AND OBJECTIVES: Social anxiety is associated with biased processing of threatening faces. Earlier research indicated that socially anxious individuals are biased towards processing low spatial frequency (LSF) information when judging facial expressions. However, it remains unclear whether this bias reflects better performance for LSF information, worse performance for high spatial frequency (HSF) information that needs to be compensated for, or both.
METHODS: To answer this question, we used frequency-filtered neutral and angry face stimuli in a speeded classification task to compare the performance of socially anxious and non-anxious individuals for different spatial frequency bands.
RESULTS: Across all spatial frequency bands, socially anxious individuals were faster in judging facial expressions. Importantly, this performance advantage was larger for LSF-filtered stimuli and most pronounced for those stimuli with the lowest frequency band. Analyzing inverse efficiency scores showed the same pattern, ruling out speed-accuracy trade-off differences between groups.
LIMITATIONS: The study uses rather artificial (bandpass-filtered) stimuli and is limited to contrasting the discrimination of neutral and angry faces. Further, only participants with subclinical anxiety took part, so clinical relevance remains to be shown.
CONCLUSIONS: Our results show that social anxiety is not characterized by deficits in judging emotions from HSF information, but by advantages when processing LSF information.
28. Flevaris AV, Martínez A, Hillyard SA. Attending to global versus local stimulus features modulates neural processing of low versus high spatial frequencies: an analysis with event-related brain potentials. Front Psychol 2014; 5:277. [PMID: 24782792 PMCID: PMC3988377 DOI: 10.3389/fpsyg.2014.00277]
Abstract
Spatial frequency (SF) selection has long been recognized to play a role in global and local processing, though the nature of the relationship between SF processing and global/local perception is debated. Previous studies have shown that attention to relatively lower SFs facilitates global perception, and that attention to relatively higher SFs facilitates local perception. Here we recorded event-related brain potentials (ERPs) to investigate whether processing of low versus high SFs is modulated automatically during global and local perception, and to examine the time course of any such effects. Participants compared bilaterally presented hierarchical letter stimuli and attended to either the global or local levels. Irrelevant SF grating probes flashed at the center of the display 200 ms after the onset of the hierarchical letter stimuli could either be low or high in SF. It was found that ERPs elicited by the SF grating probes differed as a function of attended level (global versus local). ERPs elicited by low SF grating probes were more positive in the interval 196–236 ms during global than local attention, and this difference was greater over the right occipital scalp. In contrast, ERPs elicited by the high SF gratings were more positive in the interval 250–290 ms during local than global attention, and this difference was bilaterally distributed over the occipital scalp. These results indicate that directing attention to global versus local levels of a hierarchical display facilitates automatic perceptual processing of low versus high SFs, respectively, and this facilitation is not limited to the locations occupied by the hierarchical display. The relatively long latency of these attention-related ERP modulations suggests that initial (early) SF processing is not affected by attention to hierarchical level, lending support to theories positing a higher level mechanism to underlie the relationship between SF processing and global versus local perception.
Affiliation(s)
- Antigona Martínez, Department of Neurosciences, University of California San Diego, CA, USA; Schizophrenia Research Division, Nathan Kline Institute for Psychiatric Research, New York, NY, USA
- Steven A Hillyard, Department of Neurosciences, University of California San Diego, CA, USA
29. Auditory rhythms are systemically associated with spatial-frequency and density information in visual scenes. Psychon Bull Rev 2014; 20:740-6. [PMID: 23423817 DOI: 10.3758/s13423-013-0399-y]
Abstract
A variety of perceptual correspondences between auditory and visual features have been reported, but few studies have investigated how rhythm, an auditory feature defined purely by dynamics relevant to speech and music, interacts with visual features. Here, we demonstrate a novel crossmodal association between auditory rhythm and visual clutter. Participants were shown a variety of visual scenes from diverse categories and asked to report the auditory rhythm that perceptually matched each scene by adjusting the rate of amplitude modulation (AM) of a sound. Participants matched each scene to a specific AM rate with surprising consistency. A spatial-frequency analysis showed that scenes with greater contrast energy in midrange spatial frequencies were matched to faster AM rates. Bandpass-filtering the scenes indicated that greater contrast energy in this spatial-frequency range was associated with an abundance of object boundaries and contours, suggesting that participants matched more cluttered scenes to faster AM rates. Consistent with this hypothesis, AM-rate matches were strongly correlated with perceived clutter. Additional results indicated that both AM-rate matches and perceived clutter depend on object-based (cycles per object) rather than retinal (cycles per degree of visual angle) spatial frequency. Taken together, these results suggest a systematic crossmodal association between auditory rhythm, representing density in the temporal domain, and visual clutter, representing object-based density in the spatial domain. This association may allow for the use of auditory rhythm to influence how visual clutter is perceived and attended.
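The stimulus participants adjusted is an amplitude-modulated (AM) sound: a carrier whose envelope oscillates at a controllable rate. A minimal sketch of such a stimulus, assuming a white-noise carrier and sinusoidal envelope (all parameter names and values here are illustrative, not taken from the study):

```python
import numpy as np

def am_noise(am_rate_hz, duration_s=1.0, sfreq=22050, depth=1.0, seed=0):
    """White-noise carrier with a sinusoidal amplitude envelope at
    am_rate_hz -- the quantity participants adjusted to match each scene.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * sfreq)) / sfreq
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2 * np.pi * am_rate_hz * t)
    return envelope * carrier

slow = am_noise(2.0)   # a sparse scene might be matched to a slow AM rate
fast = am_noise(12.0)  # a cluttered scene to a faster one
```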
30. Craddock M, Martinovic J, Müller MM. Task and spatial frequency modulations of object processing: an EEG study. PLoS One 2013; 8:e70293. [PMID: 23936181 PMCID: PMC3729457 DOI: 10.1371/journal.pone.0070293]
Abstract
Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.
Affiliation(s)
- Matt Craddock, Institute of Psychology, University of Leipzig, Germany
31. Comfort WE, Wang M, Benton CP, Zana Y. Processing of fear and anger facial expressions: the role of spatial frequency. Front Psychol 2013; 4:213. [PMID: 23637687 PMCID: PMC3636464 DOI: 10.3389/fpsyg.2013.00213]
Abstract
Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidths in the discrimination of anger and fear facial expressions. The general paradigm was classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1, subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12-28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2, subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3, subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and to low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum.
Affiliation(s)
- William E Comfort, Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Santo André, Brazil
32. Peters JC, Vlamings P, Kemner C. Neural processing of high and low spatial frequency information in faces changes across development: qualitative changes in face processing during adolescence. Eur J Neurosci 2013; 37:1448-57. [DOI: 10.1111/ejn.12172]
Affiliation(s)
- Petra Vlamings
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
33.
How does the hippocampal formation mediate memory for stimuli processed by the magnocellular and parvocellular visual pathways? Evidence from the comparison of schizophrenia and amnestic mild cognitive impairment (aMCI). Neuropsychologia 2012; 50:3193-9. [DOI: 10.1016/j.neuropsychologia.2012.10.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2012] [Revised: 09/22/2012] [Accepted: 10/11/2012] [Indexed: 11/18/2022]
34.
Willenbockel V, Lepore F, Nguyen DK, Bouthillier A, Gosselin F. Spatial Frequency Tuning during the Conscious and Non-Conscious Perception of Emotional Facial Expressions - An Intracranial ERP Study. Front Psychol 2012; 3:237. [PMID: 23055988 PMCID: PMC3458489 DOI: 10.3389/fpsyg.2012.00237] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2012] [Accepted: 06/22/2012] [Indexed: 11/16/2022] Open
Abstract
Previous studies have shown that complex visual stimuli, such as emotional facial expressions, can influence brain activity independently of the observers’ awareness. Little is known yet, however, about the “informational correlates” of consciousness – i.e., which low-level information correlates with brain activation during conscious vs. non-conscious perception. Here, we investigated this question in the spatial frequency (SF) domain. We examined which SFs in disgusted and fearful faces modulate activation in the insula and amygdala over time and as a function of awareness, using a combination of intracranial event-related potentials (ERPs), SF Bubbles (Willenbockel et al., 2010a), and Continuous Flash Suppression (CFS; Tsuchiya and Koch, 2005). Patients implanted with electrodes for epilepsy monitoring viewed face photographs (13° × 7°) that were randomly SF filtered on a trial-by-trial basis. In the conscious condition, the faces were visible; in the non-conscious condition, they were rendered invisible using CFS. The data were analyzed by performing multiple linear regressions on the SF filters from each trial and the transformed ERP amplitudes across time. The resulting classification images suggest that many SFs are involved in the conscious and non-conscious perception of emotional expressions, with SFs between 6 and 10 cycles per face width being particularly important early on. The results also revealed qualitative differences between the awareness conditions for both regions. Non-conscious processing relied on low SFs more and was faster than conscious processing. Overall, our findings are consistent with the idea that different pathways are employed for the processing of emotional stimuli under different degrees of awareness. 
The present study represents a first step to mapping how SF information “flows” through the emotion-processing network with a high temporal resolution and to shedding light on the informational correlates of consciousness in general.
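The core analysis step, regressing per-trial random SF filters onto a response measure to obtain a classification image, can be illustrated with simulated data. A hedged sketch: the 30 SF channels, the diagnostic band at channels 6-10, and the noise level are arbitrary stand-ins, not values or code from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sf = 2000, 30                  # trials x SF channels
filters = rng.random((n_trials, n_sf))     # random per-trial SF gains

# Simulated observer: only SF channels 6-10 drive the response
true_w = np.zeros(n_sf)
true_w[6:11] = 1.0
response = filters @ true_w + 0.5 * rng.standard_normal(n_trials)

# Reverse correlation via least squares: regress (centered) responses
# on (centered) filter profiles to recover the diagnostic SF profile
X = filters - filters.mean(axis=0)
y = response - response.mean()
cimage, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the study the dependent variable was transformed ERP amplitude at each time point rather than a single behavioral response, but the regression logic is the same.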
Affiliation(s)
- Verena Willenbockel
- Centre de Recherche en Neuropsychologie et Cognition, Département de Psychologie, Université de Montréal, Montréal, QC, Canada
35.
Casey MC, Sowden PT. Modeling learned categorical perception in human vision. Neural Netw 2012; 33:114-26. [PMID: 22622262 DOI: 10.1016/j.neunet.2012.05.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2008] [Revised: 05/01/2012] [Accepted: 05/01/2012] [Indexed: 11/15/2022]
Abstract
A long-standing debate in cognitive neuroscience concerns the extent to which perceptual processing is influenced by prior knowledge and experience with a task. A converging body of evidence now supports the view that a task does influence perceptual processing, leaving us with the challenge of understanding the locus of, and mechanisms underpinning, these influences. An exemplar of this influence is learned categorical perception (CP), in which there is superior perceptual discrimination of stimuli that are placed in different categories. Psychophysical experiments on humans have attempted to determine whether early cortical stages of visual analysis change as a result of learning a categorization task. However, while some results indicate that changes in visual analysis occur, the extent to which earlier stages of processing are changed is still unclear. To explore this issue, we developed a biologically motivated neural model of hierarchical visual processing consisting of a number of interconnected modules representing key stages of visual analysis, with each module learning to exhibit desired local properties through competition. With this system-level model, we evaluated whether a CP effect can be generated with task influence on only the later stages of visual analysis. Our model demonstrates that task learning in just the later stages is sufficient for the model to exhibit the CP effect, demonstrating the existence of a mechanism that requires only high-level task influence. However, the effect generalizes more widely than is found with human participants, suggesting that changes to earlier stages of analysis may also be involved in the human CP effect, even if these are not fundamental to the development of CP. The model prompts a hybrid account of task-based influences on perception that involves both modifications to the use of the outputs from early perceptual analysis and the possibility of changes to the nature of that early analysis itself.
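The competition mechanism by which each module "learns to exhibit desired local properties" can be sketched with a generic winner-take-all rule. This is a textbook competitive-learning update under assumed parameters (learning rate, hand-picked initial weights), not the authors' model:

```python
import numpy as np

def competitive_step(W, x, lr=0.1):
    """Winner-take-all update: the best-matching unit moves toward x."""
    winner = int(np.argmax(W @ x))
    W[winner] += lr * (x - W[winner])
    return winner

# Two units repeatedly exposed to two orthogonal inputs specialize,
# one per input (hand-picked start so each unit begins slightly biased)
W = np.array([[0.6, 0.4], [0.3, 0.7]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(100):
    competitive_step(W, e1)
    competitive_step(W, e2)
```

After training, each unit's weight vector has converged on one input pattern, the local "tuning" that the model's modules acquire through competition.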
36.
Interactive Coding of Visual Spatial Frequency and Auditory Amplitude-Modulation Rate. Curr Biol 2012; 22:383-8. [DOI: 10.1016/j.cub.2012.01.004] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2011] [Revised: 12/22/2011] [Accepted: 01/04/2012] [Indexed: 11/20/2022]
37.
Rousselet GA, Pernet CR, Caldara R, Schyns PG. Visual Object Categorization in the Brain: What Can We Really Learn from ERP Peaks? Front Hum Neurosci 2011; 5:156. [PMID: 22144959 PMCID: PMC3228234 DOI: 10.3389/fnhum.2011.00156] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2011] [Accepted: 11/14/2011] [Indexed: 11/13/2022] Open
Affiliation(s)
- Guillaume A Rousselet
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
38.
Awasthi B, Friedman J, Williams MA. Faster, stronger, lateralized: Low spatial frequency information supports face processing. Neuropsychologia 2011; 49:3583-90. [DOI: 10.1016/j.neuropsychologia.2011.08.027] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2011] [Revised: 08/25/2011] [Accepted: 08/31/2011] [Indexed: 11/28/2022]
39.
Gao Z, Bentin S. Coarse-to-fine encoding of spatial frequency information into visual short-term memory for faces but impartial decay. J Exp Psychol Hum Percept Perform 2011; 37:1051-64. [PMID: 21500938 PMCID: PMC3240681 DOI: 10.1037/a0023091] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Face perception studies have investigated how spatial frequencies (SF) are extracted from the retinal display while forming a perceptual representation, and their selective use during task-imposed categorization. Here we focused on the order in which low spatial frequencies (LSF) and high spatial frequencies (HSF) are encoded from perceptual representations into visual short-term memory (VSTM). We also investigated whether different SF ranges decay from VSTM at different rates during a study-test stimulus-onset asynchrony. An old/new VSTM paradigm was used in which two broadband faces formed the positive set and the probes preserved either low or high SF ranges. An exposure time of 500 ms was sufficient to encode both HSF and LSF in the perceptual representation (Experiment 1). Nevertheless, when the positive set was exposed for 500 ms, LSF probes were better recognized in VSTM than HSF probes; this effect vanished at an 800-ms exposure time (Experiment 2). Backward masking the positive set exposed for 800 ms re-established the LSF-probe advantage (Experiment 3). The speed of decay over up to 10 seconds was similar for LSF and HSF probes (Experiment 4). These results indicate that LSF are extracted and consolidated into VSTM faster than HSF, supporting a coarse-to-fine order, while decay from VSTM is not governed by SF.
Affiliation(s)
- Zaifeng Gao
- Department of Psychology, Zhejiang University, Hangzhou, People’s Republic of China
40.
Abstract
Humans extract visual information from the world through spatial frequency (SF) channels that are sensitive to different scales of light-dark fluctuations across visual space. Using two methods, we measured human SF tuning for discriminating videos of human actions (walking, running, skipping, and jumping). The first, more traditional, approach measured signal-to-noise ratio (s/n) thresholds for videos filtered by one of six Gaussian band-pass filters ranging from 4 to 128 cycles/image. The second approach used SF "bubbles" (Willenbockel et al., Journal of Experimental Psychology: Human Perception and Performance, 36(1), 122-135, 2010), which randomly filters the entire SF domain on each trial and uses reverse correlation to estimate SF tuning. Results from both methods were consistent and revealed a diagnostic SF band centered between 12 and 16 cycles/image (about 1-1.25 cycles/body width). Efficiency on this task, estimated by comparing s/n thresholds for humans to those of an ideal observer, was quite low (on the order of 0.04%) in both experiments.
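The six-filter bank used in the first method can be sketched as Gaussians on a log-frequency axis, one per octave-spaced center from 4 to 128 cycles/image. The log2 axis and the half-octave sigma are our assumptions for illustration; the paper's exact filter shapes may differ:

```python
import numpy as np

def log_gaussian_bank(n_freqs=129, centers=(4, 8, 16, 32, 64, 128),
                      octave_sigma=0.5):
    """Gain of each band-pass filter at integer radial SFs 1..n_freqs-1.

    Filters are Gaussian on a log2-frequency axis, one per center;
    DC is skipped because log frequency is undefined at 0.
    """
    f = np.arange(1, n_freqs)
    gains = np.stack([
        np.exp(-0.5 * ((np.log2(f) - np.log2(c)) / octave_sigma) ** 2)
        for c in centers
    ])
    return f, gains
```

Each row of `gains` would multiply the radial amplitude spectrum of a video frame to isolate one SF band before adding noise for threshold measurement.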
41.
Borgo R, Proctor K, Chen M, Jänicke H, Murray T, Thornton IM. Evaluating the impact of task demands and block resolution on the effectiveness of pixel-based visualization. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2010; 16:963-972. [PMID: 20975133 DOI: 10.1109/tvcg.2010.150] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Pixel-based visualization is a popular method of conveying large amounts of numerical data graphically. Application scenarios include business and finance, bioinformatics and remote sensing. In this work, we examined how the usability of such visual representations varied across different tasks and block resolutions. The main stimuli consisted of temporal pixel-based visualization with a white-red color map, simulating monthly temperature variation over a six-year period. In the first study, we included 5 separate tasks to exert different perceptual loads. We found that performance varied considerably as a function of task, ranging from 75% correct in low-load tasks to below 40% in high-load tasks. There was a small but consistent effect of resolution, with the uniform patch improving performance by around 6% relative to higher block resolution. In the second user study, we focused on a high-load task for evaluating month-to-month changes across different regions of the temperature range. We tested both CIE L*u*v* and RGB color spaces. We found that the nature of the change-evaluation errors related directly to the distance between the compared regions in the mapped color space. We were able to reduce such errors by using multiple color bands for the same data range. In a final study, we examined more fully the influence of block resolution on performance, and found block resolution had a limited impact on the effectiveness of pixel-based visualization.
42.
Kihara K, Takeda Y. Time course of the integration of spatial frequency-based information in natural scenes. Vision Res 2010; 50:2158-62. [DOI: 10.1016/j.visres.2010.08.012] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2010] [Revised: 08/11/2010] [Accepted: 08/12/2010] [Indexed: 10/19/2022]
43.
Goffaux V, Dakin SC. Horizontal information drives the behavioral signatures of face processing. Front Psychol 2010; 1:143. [PMID: 21833212 PMCID: PMC3153761 DOI: 10.3389/fpsyg.2010.00143] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2010] [Accepted: 08/03/2010] [Indexed: 11/13/2022] Open
Abstract
Recent psychophysical evidence indicates that the vertical arrangement of horizontal information is particularly important for encoding facial identity. In this paper we extend this notion to examine the role that information at different (particularly cardinal) orientations might play in a number of established phenomena each a behavioral “signature” of face processing. In particular we consider (a) the face inversion effect (FIE), (b) the facial identity after-effect, (c) face-matching across viewpoint, and (d) interactive, so-called holistic, processing of face parts. We report that filtering faces to remove all but the horizontal information largely preserves these effects but conversely, retaining vertical information generally diminishes or abolishes them. We conclude that preferential processing of horizontal information is a central feature of human face processing that supports many of the behavioral signatures of this critical visual operation.
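Filtering a face to retain only horizontal (or only vertical) information is typically done with an orientation wedge in the Fourier domain. A minimal sketch for square images, with our convention that 0° denotes horizontal image structure (energy along the vertical frequency axis) and a ±20° wedge chosen arbitrarily:

```python
import numpy as np

def orientation_filter(img, center_deg=0.0, half_bw_deg=20.0):
    """Keep Fourier components within a wedge around one orientation.

    center_deg = 0 keeps horizontal image structure (horizontal stripes),
    90 keeps vertical structure. half_bw_deg is the wedge half-width.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation of the *structure* each frequency component represents
    phi = np.degrees(np.arctan2(fx, fy)) % 180
    dist = np.minimum((phi - center_deg) % 180, (center_deg - phi) % 180)
    mask = dist <= half_bw_deg
    mask[0, 0] = True  # always keep the DC (mean luminance) term
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

A horizontal grating passes through unchanged at center_deg = 0, while a vertical grating is removed, the same complementary manipulation the abstract applies to faces.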
Affiliation(s)
- Valérie Goffaux
- Department of Neurocognition, Maastricht University, Maastricht, The Netherlands
44.
Delorme A, Richard G, Fabre-Thorpe M. Key visual features for rapid categorization of animals in natural scenes. Front Psychol 2010; 1:21. [PMID: 21607075 PMCID: PMC3095379 DOI: 10.3389/fpsyg.2010.00021] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2010] [Accepted: 05/26/2010] [Indexed: 11/13/2022] Open
Abstract
In speeded categorization tasks, decisions could be based on diagnostic target features or they may need the activation of complete representations of the object. Depending on task requirements, the priming of feature detectors through top-down expectation might lower the threshold of selective units or speed up the rate of information accumulation. In the present paper, 40 subjects performed a rapid go/no-go animal/non-animal categorization task with 400 briefly flashed natural scenes to study how performance depends on physical scene characteristics, target configuration, and the presence or absence of diagnostic animal features. Performance was evaluated both in terms of accuracy and speed and d' curves were plotted as a function of reaction time (RT). Such d' curves give an estimation of the processing dynamics for studied features and characteristics over the entire subject population. Global image characteristics such as color and brightness do not critically influence categorization speed, although they slightly influence accuracy. Global critical factors include the presence of a canonical animal posture and animal/background size ratio suggesting the role of coarse global form. Performance was best for both accuracy and speed, when the animal was in a typical posture and when it occupied about 20-30% of the image. The presence of diagnostic animal features was another critical factor. Performance was significantly impaired both in accuracy (drop 3.3-7.5%) and speed (median RT increase 7-16 ms) when diagnostic animal parts (eyes, mouth, and limbs) were missing. Such animal features were shown to influence performance very early when only 15-25% of the response had been produced. In agreement with other experimental and modeling studies, our results support fast diagnostic recognition of animals based on key intermediate features and priming based on the subject's expertise.
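Performance here is summarized with d' curves; the underlying computation from go/no-go counts is standard signal detection. A sketch using the stdlib normal quantile, with a log-linear correction (our choice, not necessarily the authors') to handle perfect hit or false-alarm rates:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' from raw go/no-go response counts.

    Adds 0.5 to each cell (log-linear correction) so that rates of
    exactly 0 or 1 do not produce infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)
```

Computing this within successive reaction-time bins yields the d'-versus-RT curves the abstract describes.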
Affiliation(s)
- Arnaud Delorme
- Université de Toulouse, Université Paul Sabatier, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Ghislaine Richard
- Université de Toulouse, Université Paul Sabatier, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Michele Fabre-Thorpe
- Université de Toulouse, Université Paul Sabatier, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Centre National de la Recherche Scientifique, Centre de Recherche Cerveau et Cognition, Toulouse, France
45.
Flevaris AV, Bentin S, Robertson LC. Local or global? Attentional selection of spatial frequencies binds shapes to hierarchical levels. Psychol Sci 2010; 21:424-31. [PMID: 20424080 DOI: 10.1177/0956797609359909] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Contrary to the traditional view that shapes and their hierarchical level (local or global) are a priori integrated in perception, recent evidence suggests that the identity of a shape and its level are encoded independently, implying the need for shape-level binding to account for normal perception. What is the binding mechanism in this case? Using hierarchically arranged letter shapes, we obtained evidence that the left hemisphere has a preference for binding shapes to the local level, whereas the right hemisphere has a preference for binding shapes to the global level. More important, binding is modulated by attentional selection of higher or lower spatial frequencies. Attention to higher spatial frequencies facilitated subsequent binding by the left hemisphere of elements to the local level, whereas attention to lower spatial frequencies facilitated subsequent binding by the right hemisphere of elements to the global level.
Affiliation(s)
- Anastasia V Flevaris
- Department of Psychology, University of California, Berkeley, 3210 Tolman Hall, Berkeley, CA 94720-1650, USA.
46.
de Gardelle V, Kouider S. How spatial frequencies and visual awareness interact during face processing. Psychol Sci 2009; 21:58-66. [PMID: 20424024 DOI: 10.1177/0956797609354064] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
In vision, high and low spatial frequencies have been dissociated at the cognitive and neural levels. Usually, high spatial frequency (HSF) is associated with slow analysis along the ventral cortical stream, and low spatial frequency (LSF) is associated with fast and automatic processing. These findings suggest a specific relation between spatial-frequency processing and visual awareness. We investigated this issue using masked-face priming with hybrid prime images of variable visibility. We found subliminal priming for both LSF and HSF information, along with a strong interaction between spatial frequency and visibility: HSF-related priming increased with stimulus visibility, whereas LSF influences remained unchanged. We argue that the results limit the validity of the coarse-to-fine model of vision and of models equating ventral-stream activity with perceptual awareness. Interpreting our results in light of the diagnostic approach suggests a close relation between awareness and diagnosticity.
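The hybrid primes, one face's LSF content combined with another face's HSF content, can be sketched with a radial low-pass split. An illustrative sketch; the 8 cycles/image cutoff is a placeholder, not the study's value:

```python
import numpy as np

def radial_lowpass(img, cutoff):
    """Keep radial SFs <= cutoff (cycles/image) via an FFT mask."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    r = np.hypot(fy[:, None], fx[None, :])
    return np.real(np.fft.ifft2(np.fft.fft2(img) * (r <= cutoff)))

def hybrid(lsf_src, hsf_src, cutoff=8):
    """LSF of one image plus the HSF residual of another."""
    return radial_lowpass(lsf_src, cutoff) + (
        hsf_src - radial_lowpass(hsf_src, cutoff))
```

Because the low-pass mask is idempotent, the hybrid's low band comes entirely from `lsf_src` and its high band entirely from `hsf_src`.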
Affiliation(s)
- Vincent de Gardelle
- Laboratoire des Sciences Cognitives et Psycholinguistique, CNRS/EHESS/DEC-ENS, Paris, France.
47.
van Rijsbergen NJ, Schyns PG. Dynamics of trimming the content of face representations for categorization in the brain. PLoS Comput Biol 2009; 5:e1000561. [PMID: 19911045 PMCID: PMC2768819 DOI: 10.1371/journal.pcbi.1000561] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2009] [Accepted: 10/13/2009] [Indexed: 11/23/2022] Open
Abstract
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide opened eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300. How the brain uses visual information to construct representations of categories is a central question of cognitive neuroscience. With our methods we visualize how the brain transforms its representations of facial expressions. 
Using electroencephalographic data, we analyze how representations change over the first 450 ms of processing both in feature content (e.g., which aspects of the face, such as the eyes or the mouth are represented across time) and level of detail. We show that facial expressions are initially encoded with most of their features (i.e., mouth and eyes) across all levels of details in the occipito-temporal regions. In a later phase, we show that a gradual reorganization of representations occurs, whereby only task relevant face features are kept (e.g., the mouth in “happy”) at only the finest level of details. We describe this elimination of irrelevant and redundant information as ‘trimming’. We suggest that this may be an example of the brain optimizing categorical representations.
48.
Harel A, Bentin S. Stimulus type, level of categorization, and spatial-frequencies utilization: implications for perceptual categorization hierarchies. J Exp Psychol Hum Percept Perform 2009; 35:1264-73. [PMID: 19653764 DOI: 10.1037/a0013621] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The absence of low frequencies impaired the categorization of faces similarly at both levels, whereas the absence of high frequencies was inconsequential throughout. In contrast, basic-level categorization of airplanes was equally impaired by the absence of either low or high frequencies, whereas at the subordinate level, the absence of high frequencies had more deleterious effects. These data suggest that categorization of faces either at the basic level or by race is based primarily on their global shape but also on the configuration of details. By contrast, basic-level categorization of objects is based on their global shape, whereas category-specific diagnostic details determine the information needed for their subordinate categorization. The authors conclude that the entry point in visual recognition is flexible and determined conjointly by the stimulus category and the level of categorization, which reflects the observer's recognition goal.
Affiliation(s)
- Assaf Harel
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem 91905, Israel.
49.
Smith FW, Schyns PG. Smile through your fear and sadness: transmitting and identifying facial expression signals over a range of viewing distances. Psychol Sci 2009; 20:1202-8. [PMID: 19694983 DOI: 10.1111/j.1467-9280.2009.02427.x] [Citation(s) in RCA: 96] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
It is well established that animal communication signals have adapted to the evolutionary pressures of their environment. For example, the low-frequency vocalizations of the elephant are tailored to long-range communications, whereas the high-frequency trills of birds are adapted to their more localized acoustic niche. Like the voice, the human face transmits social signals about the internal emotional state of the transmitter. Here, we address two main issues: First, we characterized the spectral composition of the facial features signaling each of the six universal expressions of emotion (happiness, sadness, fear, disgust, anger, and surprise). From these analyses, we then predicted and tested the effectiveness of the transmission of emotion signals over different viewing distances. We reveal a gradient of recognition over viewing distances constraining the relative adaptive usefulness of facial expressions of emotion (distal expressions are good signals over a wide range of viewing distances; proximal expressions are suited to closer-range communication).
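The viewing-distance manipulation works because object-based SF (cycles/face) maps to retinal SF (cycles/degree) through the visual angle the face subtends; as distance grows, a fixed band climbs in retinal SF and eventually exceeds the acuity limit (roughly 60 cycles/degree). A sketch with illustrative numbers; the 15-cm face width is an assumption, not a value from the study:

```python
import math

def cycles_per_degree(cycles_per_face, face_width_cm, distance_cm):
    """Convert object-based SF to retinal SF via the subtended angle."""
    visual_angle_deg = 2 * math.degrees(
        math.atan(face_width_cm / (2 * distance_cm)))
    return cycles_per_face / visual_angle_deg

# A fixed 8 cycles/face band rises in retinal SF as the viewer backs away
near = cycles_per_degree(8, 15, 100)   # ~0.93 cycles/degree at 1 m
far = cycles_per_degree(8, 15, 500)    # ~4.7 cycles/degree at 5 m
```

This is why expressions carried by coarse bands remain legible at distance while those relying on fine detail do not.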
Affiliation(s)
- Fraser W Smith
- Centre for Cognitive Neuroimaging and Department of Psychology, 58 Hillhead St., University of Glasgow, Glasgow G12 8QB, United Kingdom.
50.
Pilz KS, Bülthoff HH, Vuong QC. Learning influences the encoding of static and dynamic faces and their recognition across different spatial frequencies. VISUAL COGNITION 2009. [DOI: 10.1080/13506280802340588] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]