1
Papale P, Zuiderbaan W, Teeuwen RRM, Gilhuis A, Self MW, Roelfsema PR, Dumoulin SO. V1 neurons are tuned to perceptual borders in natural scenes. Proc Natl Acad Sci U S A 2024; 121:e2221623121. PMID: 39495929. DOI: 10.1073/pnas.2221623121.
Abstract
The visual system needs to identify perceptually relevant borders to segment complex natural scenes. The primary visual cortex (V1) is thought to extract local borders, and higher visual areas are thought to identify the perceptually relevant borders between objects and the background. To test this conjecture, we used natural images that had been annotated by human observers who marked the perceptually relevant borders. We assessed the effect of perceptual relevance on V1 responses using human neuroimaging, macaque electrophysiology, and computational modeling. We report that perceptually relevant borders elicit stronger responses in the early visual cortex than irrelevant ones, even if simple features, such as contrast and the energy of oriented filters, are matched. Moreover, V1 neurons discriminate perceptually relevant borders surprisingly fast, during the early feedforward-driven activity at a latency of ~50 ms, indicating that they are tuned to the features that characterize them. We also revealed a delayed, contextual effect that enhances the V1 responses that are elicited by perceptually relevant borders at a longer latency. Our results reveal multiple mechanisms that allow V1 neurons to infer the layout of objects in natural images.
Affiliation(s)
- Paolo Papale
- Department of Vision and Cognition, Netherlands Institute for Neuroscience (KNAW), Amsterdam 1105 BA, Netherlands
- Momilab Research Unit, Institutions, Markets, Technologies School for Advanced Studies Lucca, Lucca 55100, Italy
- Wietske Zuiderbaan
- Department of Computational Cognitive Neuroscience and Neuroimaging, Netherlands Institute for Neuroscience (Koninklijke Nederlandse Akademie van Wetenschappen), Amsterdam 1105 BA, Netherlands
- Spinoza Centre for Neuroimaging, Amsterdam 1105 BK, Netherlands
- Rob R M Teeuwen
- Department of Vision and Cognition, Netherlands Institute for Neuroscience (KNAW), Amsterdam 1105 BA, Netherlands
- Amparo Gilhuis
- Department of Vision and Cognition, Netherlands Institute for Neuroscience (KNAW), Amsterdam 1105 BA, Netherlands
- Matthew W Self
- Department of Vision and Cognition, Netherlands Institute for Neuroscience (KNAW), Amsterdam 1105 BA, Netherlands
- Pieter R Roelfsema
- Department of Vision and Cognition, Netherlands Institute for Neuroscience (KNAW), Amsterdam 1105 BA, Netherlands
- Department of Integrative Neurophysiology, Vrije Universiteit, Amsterdam 1081 HV, Netherlands
- Department of Neurosurgery, Academic Medical Centre, Amsterdam 1100 DD, Netherlands
- Laboratory of Visual Brain Therapy, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Sorbonne Université, Paris F-75012, France
- Serge O Dumoulin
- Department of Computational Cognitive Neuroscience and Neuroimaging, Netherlands Institute for Neuroscience (Koninklijke Nederlandse Akademie van Wetenschappen), Amsterdam 1105 BA, Netherlands
- Spinoza Centre for Neuroimaging, Amsterdam 1105 BK, Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam 1181 BT, Netherlands
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht 3584 CS, Netherlands
2
Prantner S, Giménez-García C, Espino-Payá A, Escrig MA, Ruiz-Padial E, Ballester-Arnal R, Pastor MC. The standardization of a new Explicit Pornographic Picture Set (EPPS). Behav Res Methods 2024; 56:7261-7279. PMID: 38693442. PMCID: PMC11362205. DOI: 10.3758/s13428-024-02418-z.
Abstract
Pictures with affective content have been used extensively in scientific studies of emotion and sexuality. However, only a few standardized picture sets offer explicit images, and most lack pornographic pictures depicting diverse sexual practices. This study aimed to fill this gap by developing a standardized affective set of diverse pornographic pictures (masturbation, oral sex, vaginal sex, anal sex, group sex, paraphilia) with same-sex and opposite-sex content, offering dimensional affective ratings of valence, arousal, and dominance, as well as co-elicited discrete emotions (disgust, moral and ethical acceptance). In total, 192 pornographic pictures acquired from online pornography platforms and 24 control IAPS images were rated by 319 participants (Mage = 22.66, SDage = 4.66) with self-reported same- and opposite-sex sexual attraction. Stimuli were representative of the entire affective space, including positively and negatively perceived pictures. Affective ratings differed according to participants' gender and sexual attraction, as well as stimulus content (depicted sexual practices and sexes). From the stimuli set, researchers can select explicit pornographic pictures based on the obtained affective ratings and technical parameters (i.e., pixel size, luminosity, color space, contrast, chromatic complexity, spatial frequency, entropy). The set may be considered a valid tool of diverse explicit pornographic pictures covering the affective space, in particular for women and men with same- and opposite-sex sexual attraction. This new Explicit Pornographic Picture Set (EPPS) is available to the scientific community for non-commercial use.
Affiliation(s)
- Sabine Prantner
- Departamento de Psicología Básica, Clínica y Psicobiología, Facultad de Ciencias de la Salud, Universitat Jaume I, Castelló de la Plana, Spain
- Cristina Giménez-García
- Departamento de Psicología Básica, Clínica y Psicobiología, Facultad de Ciencias de la Salud, Universitat Jaume I, Castelló de la Plana, Spain
- Alejandro Espino-Payá
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Miguel A Escrig
- Departamento de Psicología, Facultad de Ciencias de la Salud, Universidad Europea de Valencia, Valencia, Spain
- Rafael Ballester-Arnal
- Departamento de Psicología Básica, Clínica y Psicobiología, Facultad de Ciencias de la Salud, Universitat Jaume I, Castelló de la Plana, Spain
- M Carmen Pastor
- Departamento de Psicología Básica, Clínica y Psicobiología, Facultad de Ciencias de la Salud, Universitat Jaume I, Castelló de la Plana, Spain
3
Ji L, Chen Z, Zeng X, Sun B, Fu S. Automatic processing of unattended mean emotion: Evidence from visual mismatch responses. Neuropsychologia 2024; 202:108963. PMID: 39069120. DOI: 10.1016/j.neuropsychologia.2024.108963.
Abstract
The mean emotion of multiple facial expressions can be extracted rapidly and precisely. However, it remains debated whether mean emotion processing is automatic, that is, whether it can occur in the absence of attention. To address this question, we used a passive oddball paradigm and recorded event-related brain potentials while participants discriminated changes in the central fixation and a set of four faces was presented in the periphery. The face set consisted of one happy and three angry expressions (mean negative) or one angry and three happy expressions (mean positive); the mean negative and mean positive face sets were shown with probabilities of 20% (deviant) and 80% (standard), respectively, in the sequence, or vice versa. Cluster-based permutation analyses showed that the visual mismatch negativity started early, at around 92 ms, and was also observed in later time windows when the mean emotion was negative, whereas a mismatch positivity was observed at around 168-266 ms when the mean emotion was positive. The results suggest that different mechanisms may underlie the processing of mean negative and mean positive emotions. More importantly, the brain can detect changes in the mean emotion automatically, and ensemble coding of multiple facial expressions can occur in an automatic fashion without attention.
Affiliation(s)
- Luyan Ji
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Zilong Chen
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Xianqing Zeng
- School of Psychology, South China Normal University, Guangzhou, China
- Bo Sun
- Institute of Psychology and Behavior, Henan University, Kaifeng, China
- Shimin Fu
- Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
4
Brook L, Kreichman O, Masarwa S, Gilaie-Dotan S. Higher-contrast images are better remembered during naturalistic encoding. Sci Rep 2024; 14:13445. PMID: 38862623. PMCID: PMC11166978. DOI: 10.1038/s41598-024-63953-5.
Abstract
It is unclear whether memory for images of poorer visibility (such as low contrast or small size) will be worse, due to the weak signals they elicit at early visual processing stages, or perhaps better, since their processing may entail top-down processes (such as effort and attention) associated with deeper encoding. We recently showed that during naturalistic encoding (free viewing without task-related modulations), for image sizes between 3° and 24°, bigger images, which stimulate more visual system processing resources at early processing stages, are better remembered. As with size, higher contrast leads to higher activity in early visual processing. We therefore hypothesized that during naturalistic encoding, at critical visibility ranges, higher-contrast images lead to a higher signal-to-noise ratio and better signal quality flowing downstream, and are thus better remembered. Indeed, we found that during naturalistic encoding, higher-contrast images were remembered better than lower-contrast ones (~15% higher accuracy, ~1.58 times better) for images in the 7.5-60 RMS contrast range. Although image contrast and size modulate early visual processing very differently, our results further substantiate that at poor visibility ranges, during naturalistic non-instructed visual behavior, physical image dimensions (which contribute to image visibility) impact image memory.
Affiliation(s)
- Limor Brook
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Olga Kreichman
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Shaimaa Masarwa
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan
- School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- UCL Institute of Cognitive Neuroscience, London, UK
5
Roldán D, Redenbach C, Schladitz K, Kübel C, Schlabach S. Image quality evaluation for FIB-SEM images. J Microsc 2024; 293:98-117. PMID: 38112173. DOI: 10.1111/jmi.13254.
Abstract
Focused ion beam scanning electron microscopy (FIB-SEM) tomography is a serial sectioning technique where an FIB mills off slices from the material sample that is being analysed. After every slicing, an SEM image is taken showing the newly exposed layer of the sample. By combining all slices in a stack, a 3D image of the material is generated. However, specific artefacts caused by the imaging technique distort the images, hampering the morphological analysis of the structure. Typical quality problems in microscopy imaging are noise and lack of contrast or focus. Moreover, specific artefacts are caused by the FIB milling, namely, curtaining and charging artefacts. We propose quality indices for the evaluation of the quality of FIB-SEM data sets. The indices are validated on real and experimental data of different structures and materials.
Affiliation(s)
- Katja Schladitz
- Fraunhofer Institute of Industrial Mathematics, Kaiserslautern, Germany
- Christian Kübel
- Institute of Nanotechnology (INT), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Research group in-situ electron microscopy, Joint Research Laboratory Nanomaterials, Department of Materials & Earth Sciences, Technical University Darmstadt, Darmstadt, Germany
- Sabine Schlabach
- Institute of Nanotechnology (INT), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Institute for Applied Materials (IAM), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
6
Zhang Y, Bi K, Li J, Wang Y, Fang F. Dyadic visual perceptual learning on orientation discrimination. Curr Biol 2023:S0960-9822(23)00552-3. PMID: 37224810. DOI: 10.1016/j.cub.2023.04.070.
Abstract
The belief that learning can be modulated by social context is mainly supported by high-level, value-based learning studies. However, whether social context can modulate even low-level learning, such as visual perceptual learning (VPL), is still unknown. Unlike traditional VPL studies, in which participants were trained singly, here we developed a novel dyadic VPL paradigm in which paired participants were trained on the same orientation discrimination task and could monitor each other's performance. We found that the social context (i.e., dyadic training) led to greater behavioral improvement and a faster learning rate compared with single training. Interestingly, these facilitating effects were modulated by the performance difference between paired participants. Functional magnetic resonance imaging (fMRI) results showed that, compared with single training, social cognition areas including the bilateral parietal cortex and dorsolateral prefrontal cortex displayed a different activity pattern and enhanced functional connectivity with early visual cortex (EVC) during dyadic training. Furthermore, dyadic training resulted in more refined orientation representations in primary visual cortex (V1), which were closely associated with the greater behavioral improvement. Taken together, we demonstrate that social context, learning with a partner, can remarkably augment the plasticity of low-level visual information processing by reshaping neural activities in EVC and social cognition areas, as well as their functional interplay.
Affiliation(s)
- Yifei Zhang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Keyan Bi
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Jian Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yizhou Wang
- Center on Frontiers of Computing Studies, School of Computer Science, Peking University, Beijing 100871, China
- Fang Fang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
7
Schuurmans JP, Bennett MA, Petras K, Goffaux V. Backward masking reveals coarse-to-fine dynamics in human V1. Neuroimage 2023; 274:120139. PMID: 37137434. DOI: 10.1016/j.neuroimage.2023.120139.
Abstract
Natural images exhibit luminance variations aligned across a broad spectrum of spatial frequencies (SFs). It has been proposed that, at early stages of processing, the coarse signals carried by the low SFs (LSFs) of the visual input are sent rapidly from primary visual cortex (V1) to ventral, dorsal, and frontal regions to form a coarse representation of the input, which is later fed back to V1 to guide the processing of fine-grained high SFs (HSFs). We used functional magnetic resonance imaging (fMRI) to investigate the role of human V1 in the coarse-to-fine integration of visual input. We disrupted the processing of the coarse and fine content of full-spectrum human face stimuli via backward masking of selective SF ranges (LSFs: <1.75 cpd; HSFs: >1.75 cpd) at specific times (50, 83, 100, or 150 ms). In line with coarse-to-fine proposals, we found that (1) selective masking of stimulus LSFs disrupted V1 activity most in the earliest time window, with progressively decreasing influence at later windows, while (2) the opposite trend was observed for masking of stimulus HSFs. This pattern of activity was found in V1, as well as in ventral (i.e., the fusiform face area, FFA), dorsal, and orbitofrontal regions. We additionally presented subjects with contrast-negated stimuli. While contrast negation significantly reduced response amplitudes in the FFA, as well as coupling between FFA and V1, coarse-to-fine dynamics were not affected by this manipulation. The fact that V1 response dynamics to strictly identical stimulus sets differed depending on the masked scale adds to growing evidence that V1's role goes beyond the early and quasi-passive transmission of visual information to the rest of the brain. It instead indicates that V1 may serve as a 'spatially registered common forum' or 'blackboard' that integrates top-down inferences with incoming visual signals through its recurrent interactions with high-level regions in inferotemporal, dorsal, and frontal cortex.
Affiliation(s)
- Jolien P Schuurmans
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Matthew A Bennett
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IONS), UC Louvain, Louvain-la-Neuve, Belgium
- Kirsten Petras
- Integrative Neuroscience and Cognition Center, CNRS, Université Paris Cité, Paris, France
- Valérie Goffaux
- Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience (IONS), UC Louvain, Louvain-la-Neuve, Belgium; Maastricht University, Maastricht, the Netherlands
8
Zhou T, Hu D, Qiu D, Yu S, Huang Y, Sun Z, Sun X, Zhou G, Sun T, Peng H. Analysis of Light Penetration Depth in Apple Tissues by Depth-Resolved Spatial-Frequency Domain Imaging. Foods 2023; 12:1783. PMID: 37174321. PMCID: PMC10177930. DOI: 10.3390/foods12091783.
Abstract
Spatial-frequency domain imaging (SFDI) has emerged as a modality for detecting early-stage bruises in fruits such as apples, owing to its unique advantage of depth-resolved imaging. This paper presents theoretical and experimental analyses to determine the light penetration depth in apple tissues under spatially modulated illumination. Simulations and practical experiments were carried out to explore the maximum light penetration depths in 'Golden Delicious' apples. Apple experiments for early-stage bruise detection using the estimated reduced scattering coefficient mapping were then conducted to validate the results of the light penetration depths. The simulations produced light penetration depths in apple tissues (~2.2 mm) comparable to, or slightly larger than, those of the practical experiments (~1.8 mm or ~2.3 mm). The apple peel further decreased the light penetration depth, due to the high absorption of its pigment content. Apple bruises located beneath the surface peel at depths of about 0-1.2 mm could be effectively detected by the SFDI technique. This study, to our knowledge, is the first to investigate light penetration depth in apple tissues by SFDI, providing useful information for enhanced detection of early-stage apple bruising through selection of the appropriate spatial frequency.
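The depth-resolved character of SFDI comes from demodulating images captured under sinusoidal illumination at several phase offsets; the modulation (AC) amplitude at each spatial frequency probes a different sampling depth. The following sketch illustrates the standard three-phase demodulation step only; it is not code from this paper, and the variable names and synthetic inputs are assumptions.

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Three-phase SFDI demodulation.

    i1, i2, i3: images captured under sinusoidal illumination with
    phase offsets of 0, 2*pi/3 and 4*pi/3 at one spatial frequency.
    Returns the AC (modulation) amplitude and DC (planar) amplitude.
    """
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic check: a flat "sample" with known modulation amplitudes
x = np.linspace(0.0, 1.0, 100)
phase = 2.0 * np.pi * 5.0 * x  # 5 illumination cycles across the field
m_ac, m_dc = 0.3, 1.0          # known AC and DC amplitudes
imgs = [m_dc + m_ac * np.cos(phase + k * 2.0 * np.pi / 3.0) for k in range(3)]
ac, dc = sfdi_demodulate(*imgs)  # recovers ac ≈ 0.3, dc ≈ 1.0 everywhere
```

In a real measurement the recovered AC map is calibrated against a reference phantom and fed into an inverse model to estimate the reduced scattering coefficient referred to in the abstract.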
Affiliation(s)
- Tongtong Zhou
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Dong Hu
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Dekai Qiu
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Shengqi Yu
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Yuping Huang
- College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
- Zhizhong Sun
- College of Chemistry and Materials Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Xiaolin Sun
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Guoquan Zhou
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Tong Sun
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
- Hehuan Peng
- College of Optical Mechanical and Electrical Engineering, Zhejiang A&F University, Hangzhou 311300, China
9
Zheng K, Liang D, Wang X, Han Y, Griesser M, Liu Y, Fan P. Contrasting coloured ventral wings are a visual collision avoidance signal in birds. Proc Biol Sci 2022; 289:20220678. PMID: 35858052. PMCID: PMC9257291. DOI: 10.1098/rspb.2022.0678.
Abstract
Collisions between fast-moving objects often cause severe damage, but collision avoidance mechanisms of fast-moving animals remain understudied. Particularly, birds can fly fast and often in large groups, raising the question of how individuals avoid in-flight collisions that are potentially lethal. We tested the collision-avoidance hypothesis, which proposes that conspicuously contrasting ventral wings are visual signals that help birds to avoid collisions. We scored the ventral wing contrasts for a global dataset of 1780 bird species. Phylogenetic comparative analyses showed that larger species had more contrasting ventral wings than smaller species, and that in larger species, colonial breeders had more contrasting ventral wings than non-colonial breeders. Evidently, larger species have lower manoeuvrability than smaller species, and colonial-breeding species frequently encounter con- and heterospecifics, increasing their risk of in-flight collisions. Thus, more contrasting ventral wing patterns in these species are a sensory mechanism that facilitates collision avoidance.
Affiliation(s)
- Kaidan Zheng
- School of Life Sciences, Sun Yat-sen University, Guangzhou, People's Republic of China
- Dan Liang
- School of Life Sciences, Sun Yat-sen University, Guangzhou, People's Republic of China; Princeton School of Public and International Affairs, Princeton University, Princeton, NJ 08540, USA
- Xuwen Wang
- Eli Lilly and Company, Indianapolis, IN 46225, USA
- Yuqing Han
- School of Life Sciences, Sun Yat-sen University, Guangzhou, People's Republic of China
- Michael Griesser
- Department of Biology, University of Konstanz, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany; Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Yang Liu
- School of Ecology, Sun Yat-sen University, Shenzhen, People's Republic of China; State Key Laboratory of Biological Control, Sun Yat-sen University, Guangzhou, People's Republic of China
- Pengfei Fan
- School of Life Sciences, Sun Yat-sen University, Guangzhou, People's Republic of China; State Key Laboratory of Biological Control, Sun Yat-sen University, Guangzhou, People's Republic of China
10
Emotion schema effects on associative memory differ across emotion categories at the behavioural, physiological and neural level. Neuropsychologia 2022; 172:108257. PMID: 35561814. DOI: 10.1016/j.neuropsychologia.2022.108257.
Abstract
Previous behavioural and neuroimaging studies have consistently reported that memory is enhanced for associations congruent or incongruent with the structure of prior knowledge, termed schemas. However, it remains unclear whether similar effects arise with emotion-related associations, and whether they depend on the type of emotion. Here, we addressed this question using a novel face-word pair association paradigm combined with fMRI and eye-tracking techniques. In two independent studies, we demonstrated and replicated that both congruency with emotion schemas and emotion category interact to affect associative memory. Overall, memory retrieval was higher for faces from pairs congruent vs. incongruent with emotion schemas, paralleled by greater recruitment of the left inferior frontal gyrus (IFG) during successful encoding. However, emotion schema effects differed across two negative emotion categories. Disgust was remembered better than fear, and only disgust activated the left IFG more strongly during encoding of congruent vs. incongruent pairs, suggestive of deeper semantic processing of the associations. By contrast, encoding of congruent fear-related pairs (vs. disgust) was accompanied by greater activity in the right fusiform gyrus (FG), suggesting stronger sensory processing of faces. In addition, successful memory formation for congruent disgust pairs was associated with a higher pupil dilation index related to sympathetic activation, longer gaze time on words than on faces, and more gaze switches between paired words and faces. This pattern was reversed for fear-related congruent pairs, where faces attracted longer gaze times than words. Overall, our results provide converging behavioural, physiological, and neural evidence that congruency with available emotion schemas influences memory for associations in a manner similar to semantic schemas. However, these effects vary across distinct emotion categories, pointing to a differential role of semantic processing and visual attention in the modulation of memory by disgust and fear, respectively.
11
Abstract
The THINGS database is a freely available stimulus set with the potential to facilitate theory that bridges multiple areas within cognitive neuroscience. The database consists of 26,107 high-quality digital photos sorted into 1,854 concepts. While a valuable resource, relatively few technical details relevant to the design of cognitive neuroscience studies have been described. We present an analysis of two key low-level properties of THINGS images, luminance and luminance contrast. These image statistics are known to influence common physiological and neural correlates of perceptual and cognitive processes. In general, we found that the distributions of luminance and contrast are in close agreement with the statistics of natural images reported previously. However, image concepts are separable in their luminance and contrast: luminance and contrast alone are sufficient to classify images into their concepts with above-chance accuracy. We describe how these factors may confound studies using the THINGS images, and suggest simple controls that can be implemented a priori or post hoc. We discuss the importance of using such natural images as stimuli in psychological research.
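The two image statistics analysed above are straightforward to compute per image. As an illustrative sketch (not the paper's actual pipeline; the function name and the exact contrast definition are assumptions, with RMS contrast taken as the luminance standard deviation divided by the mean):

```python
import numpy as np

def luminance_stats(image):
    """Return mean luminance and RMS contrast of a grayscale image.

    `image` is a 2D array of pixel intensities scaled to [0, 1].
    RMS contrast is computed here as the standard deviation of pixel
    luminance divided by the mean luminance; other definitions
    (e.g., the plain standard deviation) are also in common use.
    """
    mean_lum = float(image.mean())
    rms_contrast = float(image.std() / mean_lum)
    return mean_lum, rms_contrast

# Illustrative use on a synthetic "image" of uniform noise in [0.2, 0.8]
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(256, 256))
mean_lum, contrast = luminance_stats(img)
```

Computing these two numbers for every stimulus is enough to implement the kind of a priori control the abstract suggests, e.g., matching luminance and contrast distributions across concepts before stimulus selection.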
Affiliation(s)
- William J Harrison
- Queensland Brain Institute and School of Psychology, The University of Queensland
12
Ohara M, Kim J, Koida K. The Role of Specular Reflections and Illumination in the Perception of Thickness in Solid Transparent Objects. Front Psychol 2022; 13:766056. PMID: 35250710. PMCID: PMC8891632. DOI: 10.3389/fpsyg.2022.766056.
Abstract
Specular reflections and refractive distortions are complex image properties of solid transparent objects, but despite this complexity, we readily perceive the 3D shapes of these objects (e.g., glass and clear plastic). We have found in past work that relevant sources of scene complexity have differential effects on 3D shape perception, with specular reflections increasing perceived thickness, and refractive distortions decreasing perceived thickness. In an object with both elements, such as glass, the two optical properties may complement each other to support reliable perception of 3D shape. We investigated the relative dominance of specular reflection and refractive distortions in the perception of shape. Surprisingly, the ratio of specular reflection to refractive component was almost equal to that of ordinary glass and ice, which promote correct percepts of 3D shape. The results were also explained by the variance in local RMS contrast in stimulus images but may depend on overall luminance and contrast of the surrounding light field.
Affiliation(s)
- Masakazu Ohara
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Juno Kim
- School of Optometry and Vision Science, University of New South Wales, Sydney, NSW, Australia
- Kowa Koida
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Toyohashi, Japan
13
Rideaux R, West RK, Wallis TSA, Bex PJ, Mattingley JB, Harrison WJ. Spatial structure, phase, and the contrast of natural images. J Vis 2022; 22:4. [PMID: 35006237 PMCID: PMC8762697 DOI: 10.1167/jov.22.1.4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 11/25/2021] [Indexed: 11/24/2022] Open
Abstract
The sensitivity of the human visual system is thought to be shaped by environmental statistics. A major endeavor in vision science, therefore, is to uncover the image statistics that predict perceptual and cognitive function. When searching for targets in natural images, for example, it has recently been proposed that target detection is inversely related to the spatial similarity of the target to its local background. We tested this hypothesis by measuring observers' sensitivity to targets that were blended with natural image backgrounds. Targets were designed to have a spatial structure that was either similar or dissimilar to the background. Contrary to masking from similarity, we found that observers were most sensitive to targets that were most similar to their backgrounds. We hypothesized that a coincidence of phase alignment between target and background results in a local contrast signal that facilitates detection when target-background similarity is high. We confirmed this prediction in a second experiment. Indeed, we show that, by solely manipulating the phase of a target relative to its background, the target can be rendered easily visible or undetectable. Our study thus reveals that, in addition to its structural similarity, the phase of the target relative to the background must be considered when predicting detection sensitivity in natural images.
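The phase manipulation used in the second experiment can be illustrated in one dimension: rotating a target's Fourier phase relative to its background either reinforces or cancels the local contrast signal. A hypothetical numpy sketch, not the authors' stimulus code:

```python
import numpy as np

def shift_phase(pattern, dphi):
    """Rotate the Fourier phase of a real 1-D pattern by dphi radians."""
    spec = np.fft.rfft(pattern)
    spec[1:] = spec[1:] * np.exp(1j * dphi)  # leave the DC term untouched
    return np.fft.irfft(spec, n=len(pattern))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
background = np.sin(x)
target_aligned = 0.5 * np.sin(x)                     # in phase with background
target_opposed = shift_phase(target_aligned, np.pi)  # 180 degrees out of phase

# A phase-aligned target boosts local contrast; an opposed one cancels it.
peak_aligned = np.max(np.abs(background + target_aligned))  # ~1.5
peak_opposed = np.max(np.abs(background + target_opposed))  # ~0.5
```

Identical target structure, opposite phase: the blended stimulus has three times the peak contrast in the aligned case.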
Affiliation(s)
- Reuben Rideaux
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- Rebecca K West
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
- Thomas S A Wallis
- Institut für Psychologie & Centre for Cognitive Science, Technische Universität Darmstadt, Darmstadt, Germany
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
- Jason B Mattingley
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
- William J Harrison
- Queensland Brain Institute, University of Queensland, St. Lucia, Queensland, Australia
- School of Psychology, University of Queensland, St. Lucia, Queensland, Australia
14
Abbas Farishta R, Yang CL, Farivar R. Blur Representation in the Amblyopic Visual System Using Natural and Synthetic Images. Invest Ophthalmol Vis Sci 2022; 63:3. [PMID: 34982147 PMCID: PMC8742520 DOI: 10.1167/iovs.63.1.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose: Amblyopia is diagnosed as a reduced acuity in an otherwise healthy eye, which indicates that the deficit is not happening in the eye, but in the brain. One suspected mechanism explaining these deficits is an elevated amount of intrinsic blur in the amblyopic visual system compared to healthy observers. This "internally produced blur" can be estimated by the "equivalent intrinsic blur method", which measures blur discrimination thresholds while systematically increasing the external blur in the physical stimulus. Surprisingly, amblyopes do not exhibit elevated intrinsic blur when measured with an edge stimulus. Given the fundamental ways in which they differ, synthetic stimuli, such as edges, are likely to generate contrasting blur perception compared to natural stimuli, such as pictures. Because our visual system is presumably tuned to process natural stimuli, testing artificial stimuli only could result in performances that are not ecologically valid.
Methods: We tested this hypothesis by measuring, for the first time, the perception of blur added to natural images in amblyopia and compared discrimination performance for natural images and synthetic edges in healthy and amblyopic groups.
Results: Our results demonstrate that patients with amblyopia exhibit higher levels of intrinsic blur than control subjects when tested on natural images. This difference was not observed when using edges.
Conclusions: Our results suggest that intrinsic blur is elevated in the visual system representing vision from the amblyopic eye and that distinct statistics of images can generate different blur perception.
Affiliation(s)
- Reza Abbas Farishta
- McGill Vision Research, Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
- Charlene L Yang
- McGill Vision Research, Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
- Reza Farivar
- McGill Vision Research, Department of Ophthalmology and Visual Sciences, McGill University, Montréal, Québec, Canada
15
ISIEA: An image database of social inclusion and exclusion in young Asian adults. Behav Res Methods 2021; 54:2409-2421. [PMID: 34918228 PMCID: PMC9579065 DOI: 10.3758/s13428-021-01736-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/26/2021] [Indexed: 12/30/2022]
Abstract
Human beings have a fundamental need to belong. Evaluating and dealing with social exclusion and social inclusion events, which represent negative and positive social interactions, respectively, are closely linked to our physical and mental health. In addition to traditional paradigms that simulate scenarios of social interaction, images are utilized as effective visual stimuli for research on socio-emotional processing and regulation. Since current mainstream emotional image databases lack social stimuli grounded in specific social contexts, we introduced an open-access image database of social inclusion/exclusion in young Asian adults (ISIEA). This database contains a set of 164 images depicting social interaction scenarios under three categories of social context (social exclusion, social neutral, and social inclusion). All images were normatively rated on valence, arousal, inclusion score, and vicarious feeling by 150 participants in Study 1. We additionally examined the relationships between image ratings and the potential factors influencing ratings. The importance of facial expression and social context to the image ratings of ISIEA was examined in Study 2. We believe that this database allows researchers to select appropriate materials for socially related studies and to flexibly implement experimental controls.
16
Petras K, Ten Oever S, Dalal SS, Goffaux V. Information redundancy across spatial scales modulates early visual cortical processing. Neuroimage 2021; 244:118613. [PMID: 34563683 PMCID: PMC8591375 DOI: 10.1016/j.neuroimage.2021.118613] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 08/30/2021] [Accepted: 09/20/2021] [Indexed: 01/23/2023] Open
Abstract
Visual images contain redundant information across spatial scales where low spatial frequency contrast is informative towards the location and likely content of high spatial frequency detail. Previous research suggests that the visual system makes use of those redundancies to facilitate efficient processing. In this framework, a fast, initial analysis of low-spatial frequency (LSF) information guides the slower and later processing of high spatial frequency (HSF) detail. Here, we used multivariate classification as well as time-frequency analysis of MEG responses to the viewing of intact and phase scrambled images of human faces to demonstrate that the availability of redundant LSF information, as found in broadband intact images, correlates with a reduction in HSF representational dominance in both early and higher-level visual areas as well as a reduction of gamma-band power in early visual cortex. Our results indicate that the cross spatial frequency information redundancy that can be found in all natural images might be a driving factor in the efficient integration of fine image details.
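The LSF/HSF split underlying this analysis is typically a radial partition of the Fourier spectrum. A minimal sketch with a hard cutoff (published work generally uses smooth filters; names and parameters here are illustrative):

```python
import numpy as np

def sf_split(img, cutoff):
    """Split an image into low- and high-spatial-frequency parts using a
    hard radial cutoff (in cycles per image) in the Fourier domain."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0]) * img.shape[0]
    fx = np.fft.fftfreq(img.shape[1]) * img.shape[1]
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    low = np.fft.ifft2(np.where(radius <= cutoff, f, 0)).real
    high = np.fft.ifft2(np.where(radius > cutoff, f, 0)).real
    return low, high

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
lsf, hsf = sf_split(img, cutoff=4)

# The two bands are complementary: they sum back to the original image.
reconstruction_error = np.max(np.abs(lsf + hsf - img))
```

The redundancy the abstract describes is the statistical dependence between `lsf` and `hsf` in natural (phase-intact) images; phase scrambling destroys it while leaving each band's power spectrum unchanged.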
Affiliation(s)
- Kirsten Petras
- Psychological Sciences Research Institute (IPSY), UC Louvain, Belgium; Department of Cognitive Neuroscience, Maastricht University, the Netherlands.
- Sanne Ten Oever
- Department of Cognitive Neuroscience, Maastricht University, the Netherlands; Max Planck Institute for Psycholinguistics, the Netherlands; Donders Institute for Cognitive Neuroimaging, Radboud University, the Netherlands
- Sarang S Dalal
- Center of Functionally Integrative Neuroscience, Aarhus University, Denmark
- Valerie Goffaux
- Psychological Sciences Research Institute (IPSY), UC Louvain, Belgium; Institute of Neuroscience (IONS), UC Louvain, Belgium; Department of Cognitive Neuroscience, Maastricht University, the Netherlands
17
Li H, Ji L, Li Q, Chen W. Individual Faces Were Not Discarded During Extracting Mean Emotion Representations. Front Psychol 2021; 12:713212. [PMID: 34671297 PMCID: PMC8520897 DOI: 10.3389/fpsyg.2021.713212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Accepted: 08/23/2021] [Indexed: 11/13/2022] Open
Abstract
Individuals can perceive the mean emotion or mean identity of a group of faces. Whether individual representations are discarded when a mean representation is extracted remains debated: the "element-independent assumption" asserts that the extraction of a mean representation does not depend on recognizing or remembering individual items, whereas the "element-dependent assumption" proposes that the extraction of a mean representation is closely connected to the processing of individual items. The processing mechanisms of mean representations and individual representations thus remain unclear. The present study used a classic member-identification paradigm and manipulated the exposure time and set size to investigate the effect of attentional resources allocated to individual faces on the processing of both the mean emotion representation and individual representations in a set, and the relationship between the two types of representations. The results showed that while the precision of individual representations was affected by attentional resources, the precision of the mean emotion representation was not. Our results indicate that two different pathways may exist for extracting a mean emotion representation and individual representations, and that the extraction of a mean emotion representation may have higher priority. Moreover, we found that individual faces in a group could be processed to a certain extent even at extremely short exposure times; the precision of individual representations was relatively poor, but individual representations were not discarded.
Affiliation(s)
- Huiyun Li
- School of Psychology, Beijing Sport University, Beijing, China; State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Luyan Ji
- Center for Brain and Cognitive Sciences, Department of Psychology, Faculty of Education, Guangzhou University, Guangzhou, China
- Qitian Li
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenfeng Chen
- Department of Psychology, Renmin University of China, Beijing, China
18
Harvey JS, Smithson HE. Low level visual features support robust material perception in the judgement of metallicity. Sci Rep 2021; 11:16396. [PMID: 34385496 PMCID: PMC8361131 DOI: 10.1038/s41598-021-95416-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 07/12/2021] [Indexed: 11/19/2022] Open
Abstract
The human visual system is able to rapidly and accurately infer the material properties of objects and surfaces in the world. Yet an inverse optics approach—estimating the bi-directional reflectance distribution function of a surface, given its geometry and environment, and relating this to the optical properties of materials—is both intractable and computationally unaffordable. Rather, previous studies have found that the visual system may exploit low-level spatio-chromatic statistics as heuristics for material judgement. Here, we present results from psychophysics and modeling that support the use of image-statistics heuristics in the judgement of metallicity—the quality of appearance that suggests an object is made from metal. Using computer graphics, we generated stimuli that varied along two physical dimensions: the smoothness of a metal object, and the evenness of its transparent coating. This allowed for the exploration of low-level image statistics, whilst ensuring that each stimulus was a naturalistic, physically plausible image. A conjoint-measurement task decoupled the contributions of these dimensions to the perception of metallicity. Low-level image features, as represented in the activations of oriented linear filters at different spatial scales, were found to correlate with the dimensions of the stimulus space, and decision-making models using these activations replicated observer performance in perceiving differences in metal smoothness and coating bumpiness, and judging metallicity. Importantly, the performance of these models did not deteriorate when objects were rotated within their simulated scene, with corresponding changes in image properties. We therefore conclude that low-level image features may provide reliable cues for the robust perception of metallicity.
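The "activations of oriented linear filters" can be sketched with a small Gabor filter: the energy of the filter response picks out the orientation structure of the input. A hypothetical numpy implementation, not the authors' model:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Odd-symmetric Gabor filter at orientation theta (radians)."""
    ax = np.arange(size) - size // 2
    y, x = np.meshgrid(ax, ax, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def oriented_energy(img, theta, wavelength=8, size=15, sigma=4):
    """Mean squared response to one oriented filter, via circular FFT convolution."""
    kern = np.zeros_like(img)
    kern[:size, :size] = gabor_kernel(size, wavelength, theta, sigma)
    resp = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)).real
    return float(np.mean(resp**2))

# A vertical grating (variation along x) excites the matching filter far more
# strongly than the orthogonally tuned one.
x = np.arange(64)
grating = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
e_vert = oriented_energy(grating, theta=0.0)
e_horiz = oriented_energy(grating, theta=np.pi / 2)
```

A bank of such energies over orientations and wavelengths (spatial scales) gives the kind of feature vector the decision-making models above would consume.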
Affiliation(s)
- Joshua S Harvey
- Neuroscience Institute, NYU Langone Health, New York, NY, 10016, USA; Department of Engineering Science, Oxford University, Oxford, OX1 3PJ, UK; Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
- Hannah E Smithson
- Department of Experimental Psychology, Oxford University, Oxford, OX2 6GG, UK
19
Qiu Y, Zhao Z, Klindt D, Kautzky M, Szatko KP, Schaeffel F, Rifai K, Franke K, Busse L, Euler T. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr Biol 2021; 31:3233-3247.e6. [PMID: 34107304 DOI: 10.1016/j.cub.2021.05.017] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/06/2021] [Accepted: 05/11/2021] [Indexed: 12/29/2022]
Abstract
Pressures for survival make sensory circuits adapted to a species' natural habitat and its behavioral challenges. Thus, to advance our understanding of the visual system, it is essential to consider an animal's specific visual environment by capturing natural scenes, characterizing their statistical regularities, and using them to probe visual computations. Mice, a prominent visual system model, have salient visual specializations, being dichromatic with enhanced sensitivity to green and UV in the dorsal and ventral retina, respectively. However, the characteristics of their visual environment that likely have driven these adaptations are rarely considered. Here, we built a UV-green-sensitive camera to record footage from mouse habitats. This footage is publicly available as a resource for mouse vision research. We found chromatic contrast to greatly diverge in the upper, but not the lower, visual field. Moreover, training a convolutional autoencoder on upper, but not lower, visual field scenes was sufficient for the emergence of color-opponent filters, suggesting that this environmental difference might have driven superior chromatic opponency in the ventral mouse retina, supporting color discrimination in the upper visual field. Furthermore, the upper visual field was biased toward dark UV contrasts, paralleled by more light-offset-sensitive ganglion cells in the ventral retina. Finally, footage recorded at twilight suggests that UV promotes aerial predator detection. Our findings support that natural scene statistics shaped early visual processing in evolution.
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Zhijian Zhao
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany
- David Klindt
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany
- Magdalena Kautzky
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152 Planegg-Martinsried, Germany
- Klaudia P Szatko
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Frank Schaeffel
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Katrin Franke
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152 Planegg-Martinsried, Germany; Bernstein Centre for Computational Neuroscience, 82152 Planegg-Martinsried, Germany.
- Thomas Euler
- Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Centre for Integrative Neuroscience (CIN), University of Tübingen, 72076 Tübingen, Germany; Bernstein Centre for Computational Neuroscience, 72076 Tübingen, Germany.
20
Nuthmann A, Clayden AC, Fisher RB. The effect of target salience and size in visual search within naturalistic scenes under degraded vision. J Vis 2021; 21:2. [PMID: 33792616 PMCID: PMC8024777 DOI: 10.1167/jov.21.4.2] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We address two questions concerning eye guidance during visual search in naturalistic scenes. First, search has been described as a task in which visual salience is unimportant. Here, we revisit this question by using a letter-in-scene search task that minimizes any confounding effects that may arise from scene guidance. Second, we investigate how important the different regions of the visual field are for different subprocesses of search (target localization, verification). In Experiment 1, we manipulated both the salience (low vs. high) and the size (small vs. large) of the target letter (a "T"), and we implemented a foveal scotoma (radius: 1°) in half of the trials. In Experiment 2, observers searched for high- and low-salience targets either with full vision or with a central or peripheral scotoma (radius: 2.5°). In both experiments, we found main effects of salience with better performance for high-salience targets. In Experiment 1, search was faster for large than for small targets, and high salience helped more for small targets. When searching with a foveal scotoma, performance was relatively unimpaired regardless of the target's salience and size. In Experiment 2, both visual-field manipulations led to search time costs, but the peripheral scotoma was much more detrimental than the central scotoma. Peripheral vision proved to be important for target localization, and central vision for target verification. Salience affected eye movement guidance to the target in both central and peripheral vision. Collectively, the results lend support for search models that incorporate salience for predicting eye-movement behavior.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Germany; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK. http://orcid.org/0000-0003-3338-3434
- Adam C Clayden
- School of Engineering, Arts, Science and Technology, University of Suffolk, UK; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK
21
Nilsson DE, Smolka J. Quantifying biologically essential aspects of environmental light. J R Soc Interface 2021; 18:20210184. [PMID: 33906390 PMCID: PMC8086911 DOI: 10.1098/rsif.2021.0184] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 04/07/2021] [Indexed: 12/18/2022] Open
Abstract
Quantifying and comparing light environments are crucial for interior lighting, architecture and visual ergonomics. Yet, current methods only catch a small subset of the parameters that constitute a light environment, and rarely account for the light that reaches the eye. Here, we describe a new method, the environmental light field (ELF) method, which quantifies all essential features that characterize a light environment, including important aspects that have previously been overlooked. The ELF method uses a calibrated digital image sensor with wide-angle optics to record the radiances that would reach the eyes of people in the environment. As a function of elevation angle, it quantifies the absolute photon flux, its spectral composition in red-green-blue resolution as well as its variation (contrast-span). Together these values provide a complete description of the factors that characterize a light environment. The ELF method thus offers a powerful and convenient tool for the assessment and comparison of light environments. We also present a graphic standard for easy comparison of light environments, and show that different natural and artificial environments have characteristic distributions of light.
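As a rough illustration of per-elevation statistics in the spirit of the ELF method, the sketch below summarizes a scene band by band; this is a toy example assuming an equirectangular luminance image with elevation along the vertical axis, not the calibrated ELF pipeline:

```python
import numpy as np

def elevation_profile(img, n_bands=6):
    """Per-elevation-band summary: mean intensity and a 'contrast-span'
    (range of log intensity) over horizontal bands of the image, with the
    top band taken as the highest elevation."""
    bands = np.array_split(img, n_bands, axis=0)
    means = np.array([b.mean() for b in bands])
    spans = np.array([np.log(b.max() + 1e-9) - np.log(b.min() + 1e-9)
                      for b in bands])
    return means, spans

# Toy scene: bright, uniform "sky" above a dim, textured "ground".
rng = np.random.default_rng(2)
sky = np.full((30, 60), 0.9)
ground = rng.uniform(0.05, 0.4, size=(30, 60))
means, spans = elevation_profile(np.vstack([sky, ground]))
```

The real method works on calibrated radiances with red-green-blue resolution; the point here is only the shape of the output, a vector of statistics indexed by elevation.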
Affiliation(s)
- Dan-E. Nilsson
- Lund Vision Group, Department of Biology, Lund University, Sölvegatan 35, 22362 Lund, Sweden
- Jochen Smolka
- Lund Vision Group, Department of Biology, Lund University, Sölvegatan 35, 22362 Lund, Sweden
22
Qian J, Kong B. Research on Global Contrast Calculation Considering Color Differences. IMAGE AND GRAPHICS TECHNOLOGIES AND APPLICATIONS 2021:189-200. [DOI: 10.1007/978-981-16-7189-0_15] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
23
Rifai K, Habtegiorgis SW, Erlenwein C, Wahl S. Motion-form interaction: Motion and form aftereffects induced by distorted static natural scenes. J Vis 2020; 20:10. [PMID: 33325995 PMCID: PMC7745598 DOI: 10.1167/jov.20.13.10] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Spatially varying distortions (SVDs) are common artifacts of spectacles like progressive additional lenses (PALs). To habituate to distortions of PALs, the visual system has to adapt to distortion-induced image alterations, termed skew adaptation. But how this visual adjustment is achieved is largely unknown. This study examines the properties of visual adaptation to distortions of PALs in natural scenes. Visual adaptation in response to altered form and motion features of the natural stimuli was probed in two different psychophysical experiments. Observers were exposed to distortions in natural images, and form and motion aftereffects were subsequently tested in a constant-stimuli procedure in which subjects were asked to judge the skew, or the motion direction, of a corresponding test stimulus. Exposure to skewed natural stimuli induced a shift in perceived undistorted form as well as motion direction, both when viewing distorted dynamic natural scenes and after exposure to static distorted natural images. Therefore, skew adaptation occurred in form and motion for dynamic visual scenes as well as static images. Thus, specifically in the condition of static skewed images and the test feature of motion direction, cortical interactions between motion and form processing presumably contributed to the adaptation process. In short, interfeature cortical interactions constituted the adaptation process to distortions of PALs, and comprehensive investigation of adaptation to distortions of PALs would benefit from taking into account the content richness of the stimuli used, such as natural images.
Affiliation(s)
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany; Carl Zeiss Vision International GmbH, Aalen, Germany
- Caroline Erlenwein
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany; Carl Zeiss Vision International GmbH, Aalen, Germany
24
Abstract
Individuals have the ability to extract summary statistics from multiple items presented simultaneously. However, it is unclear yet whether we have insight into the process of ensemble coding. The aim of this study was to investigate metacognition about average face perception. Participants saw a group of four faces presented for 2 s or 5 s, and then they were asked to judge whether the following test face was present in the previous set (Experiment 1), or whether the test face was the average of the four member faces (Experiment 2). After each response, participants rated their confidence. Replicating previous findings, there was substantial endorsement for the average face derived from the four member faces in Experiment 1, even though it was not present in the set. When judging faces that had been presented in the set, confidence correlated positively with accuracy, providing evidence for metacognitive awareness of previously studied faces. Importantly, there was a negative confidence-accuracy relationship for judging average faces when duration was 2 s, and a near-zero relationship when duration was 5 s. By contrast, when the average face had to be identified explicitly in Experiment 2, performance was above chance level and there was a positive correlation between confidence and accuracy. These results suggest that people have metacognitive awareness about average face perception when averaging is required explicitly, but they lack insight into the averaging process when member identification is required.
25
van den Berg CP, Hollenkamp M, Mitchell LJ, Watson EJ, Green NF, Marshall NJ, Cheney KL. More than noise: context-dependent luminance contrast discrimination in a coral reef fish ( Rhinecanthus aculeatus). J Exp Biol 2020; 223:jeb232090. [PMID: 32967998 DOI: 10.1242/jeb.232090] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Accepted: 09/11/2020] [Indexed: 01/19/2023]
Abstract
Achromatic (luminance) vision is used by animals to perceive motion, pattern, space and texture. Luminance contrast sensitivity thresholds are often poorly characterised for individual species and are applied across a diverse range of perceptual contexts using over-simplified assumptions of an animal's visual system. Such thresholds are often estimated using the receptor noise limited model (RNL). However, the suitability of the RNL model to describe luminance contrast perception remains poorly tested. Here, we investigated context-dependent luminance discrimination using triggerfish (Rhinecanthus aculeatus) presented with large achromatic stimuli (spots) against uniform achromatic backgrounds of varying absolute and relative contrasts. 'Dark' and 'bright' spots were presented against relatively dark and bright backgrounds. We found significant differences in luminance discrimination thresholds across treatments. When measured using Michelson contrast, thresholds for bright spots on a bright background were significantly higher than for other scenarios, and the lowest threshold was found when dark spots were presented on dark backgrounds. Thresholds expressed in Weber contrast revealed lower thresholds for spots darker than their backgrounds, which is consistent with the literature. The RNL model was unable to estimate threshold scaling across scenarios as predicted by the Weber-Fechner law, highlighting limitations in the current use of the RNL model to quantify luminance contrast perception. Our study confirms that luminance contrast discrimination thresholds are context dependent and should therefore be interpreted with caution.
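The two contrast measures compared here have standard definitions, and the same spot/background pair yields differently signed or scaled values under each. A minimal sketch (luminance values are arbitrary examples):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_spot, l_background):
    """Weber contrast: (Lspot - Lb) / Lb. Negative for spots darker than
    their background ('dark' spots), positive for brighter ones."""
    return (l_spot - l_background) / l_background

# A dark spot (L = 20) and a bright spot (L = 80) on a mid-level background (L = 50):
dark_spot = weber_contrast(20, 50)    # ~ -0.6
bright_spot = weber_contrast(80, 50)  # ~ +0.6
pair = michelson_contrast(80, 20)     # ~ 0.6
```

Weber contrast is signed, which is why it can express the dark/bright asymmetry reported above, whereas the Michelson contrast of a pair is unsigned.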
Affiliation(s)
- Cedric P van den Berg
- School of Biological Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Michelle Hollenkamp
- Department of Ecology and Evolutionary Biology, University of Colorado Boulder, Boulder, CO 80309, USA
- Laurie J Mitchell
- School of Biological Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Erin J Watson
- School of Biological Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Naomi F Green
- School of Biological Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- N Justin Marshall
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Karen L Cheney
- School of Biological Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
26
Clayden AC, Fisher RB, Nuthmann A. On the relative (un)importance of foveal vision during letter search in naturalistic scenes. Vision Res 2020; 177:41-55. [PMID: 32957035 DOI: 10.1016/j.visres.2020.07.005]
Abstract
The importance of high-acuity foveal vision to visual search can be assessed by denying foveal vision using the gaze-contingent Moving Mask technique. Foveal vision was necessary to attain normal performance when searching for a target letter in alphanumeric displays (Perception & Psychophysics, 62 (2000) 576-585). In contrast, foveal vision was not necessary to correctly locate and identify medium-sized target objects in natural scenes (Journal of Experimental Psychology: Human Perception and Performance, 40 (2014) 342-360). To explore these task differences, we used grayscale pictures of real-world scenes which included a target letter (Experiment 1: T; Experiment 2: T or L). To reduce between-scene variability with regard to target salience, we developed the Target Embedding Algorithm (T.E.A.) to place the letter in a location for which there was a median change in local contrast when inserting the letter into the scene. The presence or absence of foveal vision was crossed with four target sizes. In both experiments, search performance decreased for smaller targets and was impaired when searching the scene without foveal vision. For correct trials, the process of target localization remained completely unimpaired by the foveal scotoma, but it took longer to accept the target. We reasoned that the size of the target may affect the importance of foveal vision to the task, but the present data remain ambiguous. In summary, the data highlight the importance of extrafoveal vision for target localization, and the importance of foveal vision for target verification during letter-in-scene search.
Affiliation(s)
- Adam C Clayden
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK; School of Engineering, Arts, Science and Technology, University of Suffolk, UK
- Antje Nuthmann
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK; Institute of Psychology, University of Kiel, Germany
27
Bourgin J, Silvert L, Borg C, Morand A, Sauvée M, Moreaud O, Hot P. Impact of emotionally negative information on attentional processes in normal aging and Alzheimer's disease. Brain Cogn 2020; 145:105624. [PMID: 32932107 DOI: 10.1016/j.bandc.2020.105624]
Abstract
Impairments of emotional processing have been reported in Alzheimer's disease (AD), consistently with the existence of early amygdala atrophy in the pathology. In this study, we hypothesized that patients with AD might show a deficit of orientation toward emotional information under conditions of visual search. Eighteen patients with AD, 24 age-matched controls, and 35 young controls were eye-tracked while they performed a visual search task on a computer screen. The target was a vehicle with implicit (negative or neutral) emotional content, presented concurrently with one, three, or five non-vehicle neutral distractors. The task was to find the target and to report whether a break in the target frame was on the left or on the right side. Both control groups detected negative targets more efficiently than they detected neutral targets, showing facilitated engagement toward negative information. In contrast, patients with AD showed no influence of emotional information on engagement delays. However, all groups reported the frame break location more slowly for negative than for neutral targets (after accounting for the last fixation delay), showing a more difficult disengagement from negative information. These findings are the first to highlight a selective lack of emotional influence on engagement processes in patients with AD. The involvement of amygdala alterations in this behavioral impairment remains to be investigated.
Affiliation(s)
- Jessica Bourgin
- Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS UMR 5105, Laboratoire de Psychologie et Neurocognition (LPNC), 38000 Grenoble, France
- Laetitia Silvert
- Université Clermont Auvergne, UCA-CNRS UMR 6024, Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), 63100 Clermont-Ferrand, France
- Céline Borg
- Département de Neurologie, CHU Saint-Etienne, 42270 Saint-Priest-en-Jarez, France
- Alexandrine Morand
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, Inserm, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, GIP Cyceron, 14000 Caen, France
- Mathilde Sauvée
- Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS UMR 5105, Laboratoire de Psychologie et Neurocognition (LPNC), 38000 Grenoble, France; Centre Mémoire de Ressources et de Recherche, Pôle de Psychiatrie et Neurologie, CHU Grenoble, 38000 Grenoble, France
- Olivier Moreaud
- Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS UMR 5105, Laboratoire de Psychologie et Neurocognition (LPNC), 38000 Grenoble, France; Centre Mémoire de Ressources et de Recherche, Pôle de Psychiatrie et Neurologie, CHU Grenoble, 38000 Grenoble, France
- Pascal Hot
- Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS UMR 5105, Laboratoire de Psychologie et Neurocognition (LPNC), 38000 Grenoble, France; Institut Universitaire de France, France
28
Abstract
An ideal observer is a theoretical model observer that performs a specific sensory-perceptual task optimally, making the best possible use of the available information given physical and biological constraints. An image-computable ideal observer (pixels in, estimates out) is a particularly powerful type of ideal observer that explicitly models the flow of visual information from the stimulus-encoding process to the eventual decoding of a sensory-perceptual estimate. Image-computable ideal observer analyses underlie some of the most important results in vision science. However, most of what we know from ideal observers about visual processing and performance derives from relatively simple tasks and relatively simple stimuli. This review describes recent efforts to develop image-computable ideal observers for a range of tasks with natural stimuli and shows how these observers can be used to predict and understand perceptual and neurophysiological performance. The reviewed results establish principled links among models of neural coding, computational methods for dimensionality reduction, and sensory-perceptual performance in tasks with natural stimuli.
Affiliation(s)
- Johannes Burge
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA; Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA; Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
29
Cortical Thickness and Natural Scene Recognition in the Child's Brain. Brain Sci 2020; 10:brainsci10060329. [PMID: 32481756 PMCID: PMC7349156 DOI: 10.3390/brainsci10060329]
Abstract
Visual scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF), which are extracted first, carry coarse information, whereas high spatial frequencies (HSF), processed subsequently, carry information about fine details. The present magnetic resonance imaging study investigated how cortical thickness covaried with LSF/HSF processing abilities in ten-year-old children and adults. Participants indicated whether natural scenes that were filtered in either LSF or HSF represented outdoor or indoor scenes, while reaction times (RTs) and accuracy measures were recorded. In adults, faster RTs for LSF and HSF images were consistently associated with a thicker cortex (parahippocampal cortex, middle frontal gyrus, and precentral and insula regions for LSF; parahippocampal cortex and fronto-marginal and supramarginal gyri for HSF). In children, on the other hand, faster RTs for HSF were associated with a thicker cortex (posterior cingulate, supramarginal and calcarine cortical regions), whereas faster RTs for LSF were associated with a thinner cortex (subcallosal and insula regions). Increased cortical thickness in adults and children could correspond to an expansion mechanism linked to visual scene processing efficiency. In contrast, the lower cortical thickness associated with LSF efficiency in children could correspond to a pruning mechanism reflecting an ongoing maturational process, in agreement with the view that LSF efficiency continues to be refined during childhood. This differing pattern between children and adults appeared to be particularly significant in anterior regions of the brain, in line with the proposed existence of a postero-anterior gradient of brain development. Taken together, our results highlight the dynamic brain processes that allow children and adults to perceive a visual natural scene in a coherent way.
30
Truong TV, Holland DB, Madaan S, Andreev A, Keomanee-Dizon K, Troll JV, Koo DES, McFall-Ngai MJ, Fraser SE. High-contrast, synchronous volumetric imaging with selective volume illumination microscopy. Commun Biol 2020; 3:74. [PMID: 32060411 PMCID: PMC7021898 DOI: 10.1038/s42003-020-0787-6]
Abstract
Light-field fluorescence microscopy uniquely provides fast, synchronous volumetric imaging by capturing an extended volume in one snapshot, but often suffers from low contrast due to the background signal generated by its wide-field illumination strategy. We implemented light-field-based selective volume illumination microscopy (SVIM), where illumination is confined to only the volume of interest, removing the background generated from the extraneous sample volume, and dramatically enhancing the image contrast. We demonstrate the capabilities of SVIM by capturing cellular-resolution 3D movies of flowing bacteria in seawater as they colonize their squid symbiotic partner, as well as of the beating heart and brain-wide neural activity in larval zebrafish. These applications demonstrate the breadth of imaging applications that we envision SVIM will enable, in capturing tissue-scale 3D dynamic biological systems at single-cell resolution, fast volumetric rates, and high contrast to reveal the underlying biology.
Affiliation(s)
- Thai V Truong
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Molecular and Computational Biology Section, University of Southern California, Los Angeles, CA, 90089, USA
- Daniel B Holland
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Sara Madaan
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA
- Andrey Andreev
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA
- Kevin Keomanee-Dizon
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Josh V Troll
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Daniel E S Koo
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA
- Margaret J McFall-Ngai
- Pacific Biosciences Research Center, University of Hawaii at Manoa, Honolulu, HI, 96822, USA
- Scott E Fraser
- Translational Imaging Center, University of Southern California, Los Angeles, CA, 90089, USA
- Molecular and Computational Biology Section, University of Southern California, Los Angeles, CA, 90089, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA
31
Abramjan A, Baranová V, Frýdlová P, Landová E, Frynta D. Ultraviolet reflectance and pattern properties in leopard geckos (Eublepharis macularius). Behav Processes 2020; 173:104060. [PMID: 31991157 DOI: 10.1016/j.beproc.2020.104060]
Abstract
Complex visual signaling through various combinations of colors and patterns has been well documented in a number of diurnal reptiles. However, many nocturnal species also have highly sensitive vision and are able to discriminate colors under night conditions, as has been shown in geckos. Because of their sensitivity to chromatic signals, including UV (ultraviolet), their coloration may contain hidden features that play a role in intraspecific communication (e.g. mate choice) or interspecific signals (e.g. an antipredatory function). We explored this hypothesis in the nocturnal Leopard gecko (Eublepharis macularius), a species that uses visual signals in both antipredation defense and courtship and undergoes an ontogenetic color change accompanied by a shift in behavior. We used UV photography and visual modeling to compare various aspects of their coloration (luminance, contrast, color proportions) between sexes, age groups and populations. We found that Leopard geckos have considerable UV reflectance in the white patches on their tails (and on the head in juveniles). However, no prominent differences were detected in coloration between the various groups. We hypothesize that the restriction of UV reflectance to the head and tail, which are both actively displayed during defense, especially in juveniles, might boost the effect of antipredation signaling.
Affiliation(s)
- Andran Abramjan
- Department of Zoology, Faculty of Science, Charles University, Viničná 7, CZ-12844, Prague, Czech Republic
- Veronika Baranová
- Department of Zoology, Faculty of Science, Charles University, Viničná 7, CZ-12844, Prague, Czech Republic
- Petra Frýdlová
- Department of Zoology, Faculty of Science, Charles University, Viničná 7, CZ-12844, Prague, Czech Republic
- Eva Landová
- Department of Zoology, Faculty of Science, Charles University, Viničná 7, CZ-12844, Prague, Czech Republic
- Daniel Frynta
- Department of Zoology, Faculty of Science, Charles University, Viničná 7, CZ-12844, Prague, Czech Republic
32
Sanz Diez P, Ohlendorf A, Schaeffel F, Wahl S. Effect of spatial filtering on accommodation. Vision Res 2019; 164:62-68. [PMID: 31356834 DOI: 10.1016/j.visres.2019.07.005]
Abstract
The purpose of this study was to develop and test a new method that uses natural images to investigate the influence of their spatial frequency content on the accommodation response (AR). Furthermore, the minimum spatial frequency content was determined that was necessary to induce an AR. Blur of the images was manipulated digitally in the Fourier domain by filtering with a Sinc function. Fourteen young subjects participated in the experiment. A 2-step procedure was used: (1) verifying that a high amount of Sinc-blur does not evoke accommodation, (2) increasing the width of the Sinc-blur filter in logarithmic steps until an AR was evoked. AR was continuously monitored using eccentric infrared photorefraction at 60 Hz sampling rate under monocular viewing conditions. Under condition (1), Sinc-blur of λ = 1 cpd did not evoke accommodation, while under condition (2) an average (mean ± standard deviation) Sinc-blur of λ = 5.57 ± 4.67 cpd (median: 4 cpd, interquartile range: 2-7 cpd) evoked accommodation. Dividing the subjects into myopes and emmetropes revealed that the myopic group required higher amounts of λ (higher spatial frequencies) to stimulate their accommodation (mean λ = 9.33 ± 4.99 cpd, for myopes; and mean λ = 2.75 ± 0.97 cpd, for emmetropes). Our results support the notion that the AR is most effectively stimulated at mid-spatial frequencies and that myopes may require higher spatial frequencies to elicit a comparable AR.
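The Fourier-domain manipulation described here can be sketched as follows (a minimal illustration assuming a radially symmetric sinc window and a pixels-per-degree calibration; the study's exact filter parameterization may differ):

```python
import numpy as np

def sinc_blur(image, cutoff_cpd, px_per_deg):
    """Low-pass an image by multiplying its Fourier spectrum with a
    radial sinc window. `cutoff_cpd` plays the role of the filter
    width lambda; `px_per_deg` converts cycles/pixel to cycles/degree.
    Illustrative only -- the study's exact parameterization may differ."""
    spectrum = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]  # cycles/pixel, vertical
    fx = np.fft.fftfreq(image.shape[1])[None, :]  # cycles/pixel, horizontal
    radial_cpd = np.sqrt(fx**2 + fy**2) * px_per_deg
    window = np.sinc(radial_cpd / cutoff_cpd)  # np.sinc(x) = sin(pi x)/(pi x)
    return np.real(np.fft.ifft2(spectrum * window))
```

Because np.sinc(0) = 1, mean luminance (the DC term) is preserved while higher spatial frequencies are progressively attenuated, which is the kind of graded blur the staircase over lambda exploits.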
Affiliation(s)
- Pablo Sanz Diez
- Carl Zeiss Vision International GmbH, Turnstrasse 27, 73430 Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Elfriede-Aulhorn-Straße 7, 72076 Tuebingen, Germany
- Arne Ohlendorf
- Carl Zeiss Vision International GmbH, Turnstrasse 27, 73430 Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Elfriede-Aulhorn-Straße 7, 72076 Tuebingen, Germany
- Frank Schaeffel
- Section of Neurobiology of the Eye, Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Elfriede-Aulhorn-Straße 7, 72076 Tuebingen, Germany
- Siegfried Wahl
- Carl Zeiss Vision International GmbH, Turnstrasse 27, 73430 Aalen, Germany; Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Elfriede-Aulhorn-Straße 7, 72076 Tuebingen, Germany
33
Chen B, Mundy M, Tsuchiya N. Metacognitive Accuracy Improves With the Perceptual Learning of a Low- but Not High-Level Face Property. Front Psychol 2019; 10:1712. [PMID: 31396138 PMCID: PMC6667671 DOI: 10.3389/fpsyg.2019.01712]
Abstract
Experience with visual stimuli can improve perceptual performance, a phenomenon termed visual perceptual learning (VPL). VPL has been found to improve metacognitive measures, suggesting increased conscious accessibility to the knowledge supporting perceptual decision-making. However, such studies have largely failed to control objective task accuracy, which typically correlates with metacognition. Here, using a staircase method to control this confound, we investigated whether VPL improves the metacognitive accuracy of perceptual decision-making. Across 3 days, subjects were trained to discriminate faces based on their high-level identity or low-level contrast. Holding objective accuracy constant across training days, perceptual thresholds decreased in both tasks, demonstrating VPL in our protocol. However, while metacognitive accuracy was not affected by face contrast VPL, it was decreased by face identity VPL. Our findings could be parsimoniously explained by a dual-stage signal detection theory-based model involving an initial perceptual decision-making stage and a second confidence judgment stage. Within this model, internal noise reductions for both stages account for our face contrast VPL result, while only first-stage noise reductions account for our face identity VPL result. In summary, we found evidence suggesting that conscious knowledge accessibility was improved by the VPL of face contrast but not face identity.
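The dual-stage account can be illustrated with a toy simulation (a sketch under assumed parameters, not the authors' fitted model): stage-1 noise limits the perceptual choice itself, while stage-2 noise corrupts only the confidence judgment.

```python
import random

def simulate(n, sigma1, sigma2, d_prime=1.0, conf_criterion=1.0, seed=0):
    """Toy dual-stage SDT simulation (a sketch with assumed parameters,
    not the authors' actual model). Stage 1: noisy evidence drives the
    perceptual choice. Stage 2: the same evidence plus extra noise
    drives a high/low confidence rating."""
    rng = random.Random(seed)
    correct = errors = hi_correct = hi_error = 0
    for _ in range(n):
        stim = rng.choice([-1, 1])
        e1 = stim * d_prime / 2 + rng.gauss(0, sigma1)  # stage-1 evidence
        choice = 1 if e1 > 0 else -1
        e2 = e1 + rng.gauss(0, sigma2)                  # stage-2 evidence
        high_conf = abs(e2) > conf_criterion
        if choice == stim:
            correct += 1
            hi_correct += high_conf
        else:
            errors += 1
            hi_error += high_conf
    accuracy = correct / n
    # Crude metacognitive sensitivity: how much more often confidence
    # is high on correct than on error trials.
    meta = hi_correct / max(correct, 1) - hi_error / max(errors, 1)
    return accuracy, meta
```

Reducing stage-1 noise improves both accuracy and metacognitive sensitivity, whereas adding stage-2 noise leaves the choices (and hence accuracy) untouched but degrades metacognition; this dissociation is the kind of pattern the dual-stage model uses to separate the two VPL results.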
Affiliation(s)
- Benjamin Chen
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, VIC, Australia
- Matthew Mundy
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, VIC, Australia; Monash Institute of Cognitive and Clinical Neuroscience, Monash University, Melbourne, VIC, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Melbourne, VIC, Australia; Monash Institute of Cognitive and Clinical Neuroscience, Monash University, Melbourne, VIC, Australia
34
Image statistics of the environment surrounding freely behaving hoverflies. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 205:373-385. [PMID: 30937518 PMCID: PMC6579776 DOI: 10.1007/s00359-019-01329-1]
Abstract
Natural scenes are not as random as they might appear, but are constrained in both space and time. The two-dimensional spatial constraints can be described by quantifying the image statistics of photographs. Human observers perceive images with naturalistic image statistics as more pleasant to view, and both fly and vertebrate peripheral and higher-order visual neurons are tuned to naturalistic image statistics. However, for a given animal, what is natural differs depending on the behavior, and even if we have a broad understanding of image statistics, we know less about the scenes relevant for particular behaviors. To address this, we investigated the image statistics of the environment surrounding Episyrphus balteatus hoverflies, whose males hover in sun shafts created by surrounding trees, which produce a rich and dense background texture and intricate shadow patterns on the ground. We quantified the image statistics of photographs of the ground and the surrounding panorama, as the ventral and lateral visual fields are particularly important for visual flight control, and found differences in spatial statistics in photos where the hoverflies were hovering compared with where they were flying. Our results can, in the future, be used to create more naturalistic stimuli for experimenter-controlled experiments in the laboratory.
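One standard spatial statistic in analyses like this is the slope of the rotationally averaged amplitude spectrum, which for natural scenes typically falls near -1 (the "1/f" regularity). A minimal sketch (illustrative; not the authors' exact pipeline):

```python
import numpy as np

def amplitude_spectrum_slope(image):
    """Slope of the rotationally averaged amplitude spectrum on log-log
    axes; natural scenes typically yield values near -1 (the ~1/f
    regularity). Illustrative, not the paper's exact pipeline."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    h, w = image.shape
    y, x = np.indices((h, w))
    radius = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Rotational average: mean amplitude at each integer radius
    sums = np.bincount(radius.ravel(), weights=amp.ravel())
    counts = np.bincount(radius.ravel())
    radial_mean = sums / np.maximum(counts, 1)
    freqs = np.arange(1, min(h, w) // 2)  # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean[freqs]), 1)
    return slope
```

White noise yields a slope near 0, while photographs of natural environments typically come out near -1; comparing slopes across ground and panorama photos is one way such behavioral comparisons can be quantified.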
35
Harrison WJ. The (In)visibility of Groomed Ski Runs. Iperception 2019; 10:2041669519842895. [PMID: 31019672 PMCID: PMC6463239 DOI: 10.1177/2041669519842895]
Abstract
I analyse the visibility of "groomed" ski runs under different lighting conditions. A model of human contrast sensitivity predicts that the spatial period of groomed snow may render it invisible in the shade or on overcast days. I confirm this prediction with visual demonstrations and make a suggestion to improve visibility.
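The prediction can be sketched with a standard contrast sensitivity approximation (here the Mannos-Sakrison formula as an illustrative stand-in for the paper's model; the groove period and viewing distances are assumed values):

```python
import math

def csf(f_cpd):
    """Mannos-Sakrison contrast sensitivity approximation (normalized);
    an illustrative stand-in for the model used in the paper."""
    x = 0.114 * f_cpd
    return 2.6 * (0.0192 + x) * math.exp(-x ** 1.1)

def groove_frequency_cpd(period_m, distance_m):
    """Spatial frequency (cycles/degree) of grooves of a given period
    viewed from a given distance (small-angle approximation)."""
    return 1.0 / math.degrees(period_m / distance_m)

# Assumed values: ~3 cm corduroy period viewed from 10 m vs. 100 m.
near = groove_frequency_cpd(0.03, 10)   # ~5.8 cpd, near peak sensitivity
far = groove_frequency_cpd(0.03, 100)   # ~58 cpd, far beyond the peak
print(csf(near) > csf(far))  # True
```

The same grooves that sit near the peak of the contrast sensitivity function up close fall far beyond it at distance; combined with the reduced physical contrast of the grooves under diffuse (shaded or overcast) light, this is the regime in which they can become invisible.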
Affiliation(s)
- William J. Harrison
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
36
Weierich MR, Kleshchova O, Rieder JK, Reilly DM. The Complex Affective Scene Set (COMPASS): Solving the Social Content Problem in Affective Visual Stimulus Sets. Collabra: Psychology 2019. [DOI: 10.1525/collabra.256]
Abstract
Social information, including faces and human bodies, holds special status in visual perception generally, and in visual processing of complex arrays such as real-world scenes specifically. To date, unbalanced representation of social compared with nonsocial information in affective stimulus sets has limited the clear determination of effects as attributable to, or independent of, social content. We present the Complex Affective Scene Set (COMPASS), a set of 150 social and 150 nonsocial naturalistic affective scenes that are balanced across valence and arousal dimensions. Participants (n = 847) rated valence and arousal for each scene. The normative ratings for the 300 images together, and separately by social content, show the canonical boomerang shape that confirms coverage of much of the affective circumplex. COMPASS adds uniquely to existing visual stimulus sets by balancing social content across affect dimensions, thereby eliminating a potentially major confound across affect categories (i.e., combinations of valence and arousal). The robust special status of social information persisted even after balancing of affect categories and was observed in slower rating response times for social versus nonsocial stimuli. The COMPASS images also match the complexity of real-world environments by incorporating stimulus competition within each scene. Together, these attributes facilitate the use of the stimulus set in particular for disambiguating the effects of affect and social content for a range of research questions and populations.
Affiliation(s)
- Mariann R. Weierich
- Department of Psychology, The University of Nevada Reno, Reno, NV, US
- Department of Psychology, Hunter College, The City University of New York, New York, NY, US
- The Graduate Center, The City University of New York, New York, NY, US
- Olena Kleshchova
- Department of Psychology, Hunter College, The City University of New York, New York, NY, US
- The Graduate Center, The City University of New York, New York, NY, US
- Jenna K. Rieder
- Department of Psychology, Hunter College, The City University of New York, New York, NY, US
- The Graduate Center, The City University of New York, New York, NY, US
- College of Humanities and Sciences, Thomas Jefferson University, Philadelphia, PA, US
- Danielle M. Reilly
- Department of Psychology, Hunter College, The City University of New York, New York, NY, US
37
Staaks D, Olynick DL, Rangelow IW, Altoe MVP. Polymer-metal coating for high contrast SEM cross sections at the deep nanoscale. Nanoscale 2018; 10:22884-22895. [PMID: 30488943 DOI: 10.1039/c8nr06669h]
Abstract
In scanning electron microscopy (SEM), imaging nanoscale features by means of the cross-sectioning method becomes increasingly challenging with shrinking feature sizes. However, obtaining high quality images, at high magnification, is crucial for critical dimension and patterned feature evaluation. Therefore, in this work, we present a new sample preparation method for high performance cross-sectional secondary electron (SE) imaging, targeting features at the deep nanoscale and into the sub-10 nm regime. Different coating architectures including conductive and non-conductive polymer, carbon and metal are compared on their ability to discern etching feature profiles and materials interfaces of densely packed nano-patterned features. A stacked coating of polymer and metal produced better visibility mainly due to enhancement of contrast between feature and background. Contrast was evaluated by using histograms of intensity of gray levels directly derived from SE images, obtained by the SE in-lens detector. In polymer-metal coatings (PMC), optimization of contrast is explored by varying the thickness of the metal layer and results are discussed in terms of the effectiveness of the metal layer in reducing the escape of secondary electrons (SE) generated in the polymer layer and feature. Other advantages of PMCs are their cleanroom compatibility and ease of coating removal.
Affiliation(s)
- Daniel Staaks
- Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, 94720, USA
38
Howard SR, Shrestha M, Schramme J, Garcia JE, Avarguès-Weber A, Greentree AD, Dyer AG. Honeybees prefer novel insect-pollinated flower shapes over bird-pollinated flower shapes. Curr Zool 2018; 65:457-465. [PMID: 31413718 PMCID: PMC6688580 DOI: 10.1093/cz/zoy095]
Abstract
Plant–pollinator interactions have a fundamental influence on flower evolution. Flower color signals are frequently tuned to the visual capabilities of important pollinators such as bees or birds, but far less is known about whether flower shape influences the choices of pollinators. We tested European honeybee Apis mellifera preferences using novel achromatic (gray-scale) images of 12 insect-pollinated and 12 bird-pollinated native Australian flowers in Germany, thus avoiding influences of color, odor, or prior experience. Independent bees were tested with a number of parameterized images specifically designed to assess preferences for size, shape, brightness, or the number of flower-like shapes present in an image. We show that honeybees have a preference for visiting images of insect-pollinated flowers and that this preference is most likely mediated by holistic information rather than by individual image parameters. Our results indicate that angiosperms have evolved flower shapes which influence the choice behavior of important pollinators, and thus suggest that spatial achromatic flower properties are an important part of visual signaling for plant–pollinator interactions.
Affiliation(s)
- Scarlett R Howard
- Bio-inspired Digital Sensing (BIDS) Lab, School of Media and Communication, RMIT University, Melbourne, Victoria 3000, Australia
- Mani Shrestha
- Bio-inspired Digital Sensing (BIDS) Lab, School of Media and Communication, RMIT University, Melbourne, Victoria 3000, Australia; Faculty of Information Technology, Monash University, Melbourne, Victoria 3800, Australia
- Juergen Schramme
- Institute of Developmental Biology and Neurobiology (iDn), Johannes Gutenberg University, Mainz 55122, Germany
- Jair E Garcia
- Bio-inspired Digital Sensing (BIDS) Lab, School of Media and Communication, RMIT University, Melbourne, Victoria 3000, Australia
- Aurore Avarguès-Weber
- Centre de Recherches sur la Cognition Animale, Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse 31400, France
- Andrew D Greentree
- ARC Centre of Excellence for Nanoscale BioPhotonics, School of Science, RMIT University, Melbourne, Victoria 3000, Australia
- Adrian G Dyer
- Bio-inspired Digital Sensing (BIDS) Lab, School of Media and Communication, RMIT University, Melbourne, Victoria 3000, Australia; Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
39
Ferrero A, Velázquez JL, Perales E, Campos J, Martínez Verdú FM. Definition of a measurement scale of graininess from reflectance and visual measurements. OPTICS EXPRESS 2018; 26:30116-30127. [PMID: 30469891 DOI: 10.1364/oe.26.030116] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Accepted: 08/25/2018] [Indexed: 06/09/2023]
Abstract
Effect pigments in coatings produce eye-catching colour and texture effects and are widely used in the automotive, cosmetics, coatings, inks, flooring, textile, and decoration industries. One of these texture effects is graininess, the texture perceived when an effect coating is observed under diffuse illumination. To date, there is no standard procedure for measuring graininess from reflectance measurements. The objective of this work is to propose a methodology for traceable graininess measurements, analogous to the one established for colour in 1931. In this article, the relevant reflectance-based quantities are clearly defined, and a formal relation with data from visual experiments is given. This methodology would allow a measurement scale of graininess and a difference formula to be agreed upon once conclusive visual data become available.
40
Habtegiorgis SW, Rifai K, Lappe M, Wahl S. Experience-dependent long-term facilitation of skew adaptation. J Vis 2018; 18:7. [DOI: 10.1167/18.9.7] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Katharina Rifai
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Markus Lappe
- Institute of Psychology, University of Muenster, Muenster, Germany
- Siegfried Wahl
- Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
41
Birman D, Gardner JL. A quantitative framework for motion visibility in human cortex. J Neurophysiol 2018; 120:1824-1839. [PMID: 29995608 DOI: 10.1152/jn.00433.2018] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Despite the central use of motion visibility to reveal the neural basis of perception, perceptual decision making, and sensory inference, there exists no comprehensive quantitative framework establishing how motion visibility parameters modulate human cortical responses. Random-dot motion stimuli can be made less visible by reducing image contrast or motion coherence, or by shortening the stimulus duration. Because each of these manipulations modulates the strength of sensory neural responses, they have all been extensively used to reveal cognitive and other nonsensory phenomena such as the influence of priors, attention, and choice-history biases. However, each of these manipulations is thought to influence responses in different ways across different cortical regions, and a comprehensive study is required to interpret this literature. Here, human participants observed random-dot stimuli varying across a large range of contrasts, coherences, and stimulus durations while we measured blood-oxygen-level-dependent responses. We developed a framework for modeling these responses that quantifies their functional form and sensitivity across areas. Our framework demonstrates the sensitivity of all visual areas to each parameter, with early visual areas V1-V4 showing more parametric sensitivity to changes in contrast, and V3A and the human middle temporal area to coherence. Our results suggest that while motion contrast, coherence, and duration share cortical representation, they are encoded with distinct functional forms and sensitivities. Thus, our quantitative framework serves as a reference for interpreting the vast perceptual literature manipulating these parameters and shows that different manipulations of visibility will have different effects across human visual cortex and need to be interpreted accordingly. NEW & NOTEWORTHY Manipulations of motion visibility have served as a key tool for understanding the neural basis of visual perception. Here we measured human cortical responses to changes in visibility across a comprehensive range of motion visibility parameters and modeled these with a quantitative framework. Our quantitative framework can be used as a reference for linking human cortical responses to perception and underscores that different manipulations of motion visibility can have greatly different effects on cortical representation.
Affiliation(s)
- Daniel Birman
- Department of Psychology, Stanford University, Stanford, California
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California
42
Wang Y, Zhu Z, Chen B, Fang F. Perceptual learning and recognition confusion reveal the underlying relationships among the six basic emotions. Cogn Emot 2018; 33:754-767. [PMID: 29962270 DOI: 10.1080/02699931.2018.1491831] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
The six basic emotions (disgust, anger, fear, happiness, sadness, and surprise) have long been considered discrete categories that serve as the primary units of the emotion system. Yet recent evidence has indicated underlying connections among them. Here we tested the underlying relationships among the six basic emotions using a perceptual learning procedure, a technique with the potential to causally change participants' emotion detection ability. We found that training on detecting a facial expression improved performance not only on the trained expression but also on other expressions. Such a transfer effect was consistently demonstrated between disgust and anger detection, as well as between fear and surprise detection, in two experiments (Experiment 1A, n = 70; Experiment 1B, n = 42). Notably, training on any of the six emotions could improve happiness detection, whereas sadness detection could only be improved by training on sadness itself, suggesting the uniqueness of happiness and sadness. In an emotion recognition test using a large sample of Chinese participants (n = 1748), the confusion between disgust and anger, as well as between fear and surprise, was further confirmed. Taken together, our study demonstrates that the "basic" emotions share some common psychological components, which might be the more basic units of the emotion system.
Affiliation(s)
- Yingying Wang
- Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- Zijian Zhu
- Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- Biqing Chen
- Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- Fang Fang
- Academy for Advanced Interdisciplinary Studies, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- School of Psychological and Cognitive Sciences, Peking University, Beijing, People's Republic of China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
43
Huang PC, Dai YM. Binocular contrast-gain control for natural scenes: Image structure and phase alignment. Vision Res 2018; 146-147:18-31. [PMID: 29704536 DOI: 10.1016/j.visres.2018.02.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Revised: 02/12/2018] [Accepted: 02/12/2018] [Indexed: 10/17/2022]
Abstract
In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which removed the target spatial frequency), and misaligned pedestals (unfiltered pedestals that were rotated). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage, and that phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicate that both phase-alignment information and similar image structures cause strong interocular suppression.
Affiliation(s)
- Pi-Chun Huang
- Department of Psychology, National Cheng Kung University, Tainan, Taiwan
- Yu-Ming Dai
- Department of Psychology, National Cheng Kung University, Tainan, Taiwan
44
Ji L, Pourtois G. Capacity limitations to extract the mean emotion from multiple facial expressions depend on emotion variance. Vision Res 2018; 145:39-48. [PMID: 29660371 DOI: 10.1016/j.visres.2018.03.007] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Revised: 01/23/2018] [Accepted: 03/11/2018] [Indexed: 10/17/2022]
Abstract
We examined the processing capacity and the role of emotion variance in ensemble representation for multiple facial expressions shown concurrently. A standard set size manipulation was used, whereby the sets consisted of 4, 8, or 16 morphed faces each uniquely varying along a happy-angry continuum (Experiment 1) or a neutral-happy/angry continuum (Experiments 2 & 3). Across the three experiments, we reduced the amount of emotion variance in the sets to explore the boundaries of this process. Participants judged the perceived average emotion from each set on a continuous scale. We computed and compared objective and subjective difference scores, using the morph units and post-experiment ratings, respectively. Results of the subjective scores were more consistent than the objective ones across the first two experiments where the variance was relatively large, and revealed each time that increasing set size led to a poorer averaging ability, suggesting capacity limitations in establishing ensemble representations for multiple facial expressions. However, when the emotion variance in the sets was reduced in Experiment 3, both subjective and objective scores remained unaffected by set size, suggesting that the emotion averaging process was unlimited in these conditions. Collectively, these results suggest that extracting mean emotion from a set composed of multiple faces depends on both structural (attentional) and stimulus-related effects.
Affiliation(s)
- Luyan Ji
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Gilles Pourtois
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
45
Mares I, Smith ML, Johnson MH, Senju A. Revealing the neural time-course of direct gaze processing via spatial frequency manipulation of faces. Biol Psychol 2018; 135:76-83. [PMID: 29510183 DOI: 10.1016/j.biopsycho.2018.03.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2017] [Revised: 12/21/2017] [Accepted: 03/01/2018] [Indexed: 10/17/2022]
Abstract
Direct gaze is a powerful social cue signalling the attention of another person toward oneself. Here we investigated the relevance of low spatial frequency (LSF) and high spatial frequency (HSF) information in facial cues for direct gaze processing. We identified two distinct peaks in the ERP response, the N170 and N240 components. These two components were related to different stimulus conditions and influenced by different spatial frequencies. In particular, larger N170 and N240 amplitudes were observed for direct gaze than for averted gaze, but only in the N240 component was this effect modulated by spatial frequency, where it relied on LSF information. By contrast, larger N170 and N240 components were observed for faces than for non-facial stimuli, but this effect was modulated by spatial frequency only in the N170 component, where it relied on HSF information. The present study highlights the existence of two functionally distinct components related to direct gaze processing.
Affiliation(s)
- Inês Mares
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, Henry Wellcome Building, Malet Street, London WC1E 7HX, United Kingdom
- Department of Psychological Sciences, Birkbeck, University of London, Birkbeck College, Malet Street, London WC1E 7HX, United Kingdom
- Marie L Smith
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, Henry Wellcome Building, Malet Street, London WC1E 7HX, United Kingdom
- Department of Psychological Sciences, Birkbeck, University of London, Birkbeck College, Malet Street, London WC1E 7HX, United Kingdom
- Mark H Johnson
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, Henry Wellcome Building, Malet Street, London WC1E 7HX, United Kingdom
- Atsushi Senju
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, Henry Wellcome Building, Malet Street, London WC1E 7HX, United Kingdom
46
Maiello G, Kwon M, Bex PJ. Three-dimensional binocular eye-hand coordination in normal vision and with simulated visual impairment. Exp Brain Res 2018; 236:691-709. [PMID: 29299642 PMCID: PMC6693328 DOI: 10.1007/s00221-017-5160-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Accepted: 12/21/2017] [Indexed: 10/18/2022]
Abstract
Sensorimotor coupling in healthy humans is demonstrated by the higher accuracy of visually tracking intrinsically rather than extrinsically generated hand movements in the fronto-parallel plane. It is unknown whether this coupling also facilitates vergence eye movements for tracking objects in depth, or can overcome symmetric or asymmetric binocular visual impairments. Human observers were therefore asked to track with their gaze a target moving horizontally or in depth. The movement of the target was either directly controlled by the observer's hand or followed hand movements executed by the observer in a previous trial. Visual impairments were simulated by blurring stimuli independently in each eye. Accuracy was higher for self-generated movements in all conditions, demonstrating that motor signals are employed by the oculomotor system to improve the accuracy of vergence as well as horizontal eye movements. Asymmetric monocular blur affected horizontal tracking less than symmetric binocular blur, but impaired tracking in depth as much as binocular blur. There was a critical blur level up to which pursuit and vergence eye movements maintained tracking accuracy independently of blur level. Hand-eye coordination may therefore help compensate for functional deficits associated with eye disease and may be employed to augment visual impairment rehabilitation.
Affiliation(s)
- Guido Maiello
- UCL Institute of Ophthalmology, University College London, 11-43 Bath Street, London, EC1V 9EL, UK
- Department of Experimental Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- MiYoung Kwon
- Department of Ophthalmology, University of Alabama at Birmingham, 700 S. 18th Street, Birmingham, AL 35294-0009, USA
- Peter J Bex
- Department of Psychology, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
47
Roux-Sibilon A, Rutgé F, Aptel F, Attye A, Guyader N, Boucart M, Chiquet C, Peyrin C. Scene and human face recognition in the central vision of patients with glaucoma. PLoS One 2018; 13:e0193465. [PMID: 29481572 PMCID: PMC5826536 DOI: 10.1371/journal.pone.0193465] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2017] [Accepted: 02/12/2018] [Indexed: 11/18/2022] Open
Abstract
Primary open-angle glaucoma (POAG) primarily affects peripheral vision. Current behavioral studies support the idea that the visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess the visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test): patients with both peripheral and central defects, and patients with peripheral but no central defect. Age-matched controls also participated in the experiment. All participants performed two visual tasks in which low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities, whereas a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images, consistent with their abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than for detection. Patients without a central defect performed similarly to controls in detecting and categorizing faces. However, while their detection of scene images was well maintained, these patients showed a deficit in categorizing them. This suggests that the loss of peripheral vision alone can be detrimental to scene recognition, even when the information is displayed in central vision. This study thus revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry using the Humphrey 24–2 SITA-Standard test.
Affiliation(s)
- Floriane Rutgé
- Department of Ophthalmology, University Hospital, Grenoble, France
- Florent Aptel
- Department of Ophthalmology, University Hospital, Grenoble, France
- Arnaud Attye
- Department of Neuroradiology and MRI, University Hospital, Grenoble, France
- Nathalie Guyader
- Université Grenoble Alpes, CNRS, GIPSA-Lab UMR 5210, Grenoble, France
- Muriel Boucart
- Université de Lille, CNRS, SCALab UMR 9193, Lille, France
48
Naturalness Preserved Image Enhancement Using a priori Multi-Layer Lightness Statistics. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:938-948. [PMID: 29200804 PMCID: PMC5708854 DOI: 10.1109/tip.2017.2771449] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Enhancement of non-uniformly illuminated images often suffers from over-enhancement and produces unnatural results. This paper presents a naturalness preserved enhancement method for non-uniformly illuminated images, using a priori multi-layer lightness statistics acquired from high-quality images. Our work makes three important contributions: designing a novel multi-layer image enhancement model; deriving the multi-layer lightness statistics of high-quality outdoor images, which are incorporated into the multi-layer enhancement model; and showing that the overall quality rating of enhanced images is consistent with a combination of contrast enhancement and naturalness preservation. Two separate human observer evaluation studies were conducted on naturalness preservation and overall image quality. The results showed the proposed method outperformed four compared state-of-the-art enhancement methods.
49
Ji L, Rossi V, Pourtois G. Mean emotion from multiple facial expressions can be extracted with limited attention: Evidence from visual ERPs. Neuropsychologia 2018; 111:92-102. [PMID: 29371095 DOI: 10.1016/j.neuropsychologia.2018.01.022] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Revised: 12/04/2017] [Accepted: 01/16/2018] [Indexed: 11/30/2022]
Abstract
Human observers can readily extract the mean emotion from multiple faces shown briefly. However, it remains debated whether this ability depends on attention. To address this question, we recorded lateralized event-related brain potentials (i.e., the N2pc and SPCN) to track covert shifts of spatial attention while healthy adult participants discriminated the mean emotion of four faces shown in the periphery at an attended or unattended spatial location, using a cueing technique. As a control condition, they were asked to discriminate the emotional expression of a single face shown in the periphery. Analyses of saccade-free data showed that mean emotion discrimination was above chance level but statistically indistinguishable between the attended and unattended locations, suggesting that attention was not a prerequisite for averaging. Interestingly, at the ERP level, covert shifts of spatial attention were captured by the N2pc and SPCN components. Altogether, these novel findings suggest that averaging multiple facial expressions shown in the periphery can operate with limited attention.
Affiliation(s)
- Luyan Ji
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Valentina Rossi
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Gilles Pourtois
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
50
Abstract
The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e., accuracy < 0.6°, precision < 0.25°, latency < 50 ms, and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth pursuit, and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring microsaccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.