1. Hunter B, Montgomery B, Sridhar A, Markant J. Endogenous Control and Reward-based Mechanisms Shape Infants' Attention Biases to Caregiver Faces. Dev Psychobiol 2024; 66:e22521. PMID: 38952248; DOI: 10.1002/dev.22521.
Abstract
Infants rely on developing attention skills to identify relevant stimuli in their environments. Although caregivers are socially rewarding and a critical source of information, they are also one of many stimuli that compete for infants' attention. Young infants preferentially hold attention on caregiver faces, but it is unknown whether they also preferentially orient to caregivers and the extent to which these attention biases reflect reward-based attention mechanisms. To address these questions, we measured 4- to 10-month-old infants' (N = 64) frequency of orienting and duration of looking to caregiver and stranger faces within multi-item arrays. We also assessed whether infants' attention to these faces related to individual differences in Surgency, an indirect index of reward sensitivity. Although infants did not show biased attention to caregiver versus stranger faces at the group level, infants were increasingly biased to orient to stranger faces with age and infants with higher Surgency scores showed more robust attention orienting and attention holding biases to caregiver faces. These effects varied based on the selective attention demands of the task, suggesting that infants' attention biases to caregiver faces may reflect both developing attention control skills and reward-based attention mechanisms.
Affiliation(s)
- Brianna Hunter: Center for Mind and Brain, University of California Davis, Davis, California, USA; Department of Psychology, Tulane University, New Orleans, Louisiana, USA
- Brooke Montgomery: Department of Psychology, Tulane University, New Orleans, Louisiana, USA
- Aditi Sridhar: Department of Psychology, Ashoka University, Sonipat, India
- Julie Markant: Department of Psychology, Tulane University, New Orleans, Louisiana, USA; Tulane Brain Institute, Tulane University, New Orleans, Louisiana, USA
2. Prunty JE, Jenkins R, Qarooni R, Bindemann M. A cognitive template for human face detection. Cognition 2024; 249:105792. PMID: 38763070; DOI: 10.1016/j.cognition.2024.105792.
Abstract
Faces are highly informative social stimuli, yet before any information can be accessed, the face must first be detected in the visual field. A detection template that serves this purpose must be able to accommodate the wide variety of face images we encounter, but how this generality could be achieved remains unknown. In this study, we investigate whether statistical averages of previously encountered faces can form the basis of a general face detection template. We provide converging evidence from a range of methods-human similarity judgements and PCA-based image analysis of face averages (Experiments 1-3), human detection behaviour for faces embedded in complex scenes (Experiments 4 and 5), and simulations with a template-matching algorithm (Experiments 6 and 7)-to examine the formation, stability and robustness of statistical image averages as cognitive templates for human face detection. We integrate these findings with existing knowledge of face identification, ensemble coding, and the development of face perception.
Affiliation(s)
- Jonathan E Prunty: Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Rob Jenkins: Department of Psychology, University of York, York, UK
- Rana Qarooni: Department of Psychology, University of York, York, UK
3. Prunty J, Jenkins R, Qarooni R, Bindemann M. Face detection in contextual scenes. PLoS One 2024; 19:e0304288. PMID: 38865378; PMCID: PMC11168631; DOI: 10.1371/journal.pone.0304288.
Abstract
Object and scene perception are intertwined. When objects are expected to appear within a particular scene, they are detected and categorised with greater speed and accuracy. This study examined whether such context effects also moderate the perception of social objects such as faces. Female and male faces were embedded in scenes with a stereotypical female or male context. Semantic congruency of these scene contexts influenced the categorisation of faces (Experiment 1). These effects were bi-directional, such that face sex also affected scene categorisation (Experiment 2), suggesting concurrent automatic processing of both levels. In contrast, the more elementary task of face detection was not affected by semantic scene congruency (Experiment 3), even when scenes were previewed prior to face presentation (Experiment 4). This pattern of results indicates that semantic scene context can affect categorisation of faces. However, the earlier perceptual stage of detection appears to be encapsulated from the cognitive processes that give rise to this contextual interference.
Affiliation(s)
- Jonathan Prunty: School of Psychology, University of Kent, Canterbury, United Kingdom; Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Rob Jenkins: Department of Psychology, University of York, York, United Kingdom
- Rana Qarooni: Department of Psychology, University of York, York, United Kingdom
- Markus Bindemann: School of Psychology, University of Kent, Canterbury, United Kingdom
4. Zhang Y, Zhang H, Fu S. Relative saliency affects attentional capture and suppression of color and face singleton distractors: evidence from event-related potential studies. Cereb Cortex 2024; 34:bhae176. PMID: 38679483; DOI: 10.1093/cercor/bhae176.
Abstract
Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.
Affiliation(s)
- Yue Zhang, Hai Zhang, and Shimin Fu: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, 230 Wai Huan Xi Road, Guangzhou Higher Education Mega Center, Guangzhou 510006, China
5. Zeng G, Simpson EA, Paukner A. Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behav Res Methods 2024; 56:881-907. PMID: 36890330; DOI: 10.3758/s13428-022-02056-3.
Abstract
Remote eye tracking with automated corneal reflection provides insights into the emergence and development of cognitive, social, and emotional functions in human infants and non-human primates. However, because most eye-tracking systems were designed for use in human adults, the accuracy of eye-tracking data collected in other populations is unclear, as are potential approaches to minimize measurement error. For instance, data quality may differ across species or ages, which are necessary considerations for comparative and developmental studies. Here we examined how the calibration method and adjustments to areas of interest (AOIs) of the Tobii TX300 changed the mapping of fixations to AOIs in a cross-species longitudinal study. We tested humans (N = 119) at 2, 4, 6, 8, and 14 months of age and macaques (Macaca mulatta; N = 21) at 2 weeks, 3 weeks, and 6 months of age. In all groups, we found improvement in the proportion of AOI hits detected as the number of successful calibration points increased, suggesting calibration approaches with more points may be advantageous. Spatially enlarging and temporally prolonging AOIs increased the number of fixation-AOI mappings, suggesting improvements in capturing infants' gaze behaviors; however, these benefits varied across age groups and species, suggesting different parameters may be ideal, depending on the population studied. In sum, to maximize usable sessions and minimize measurement error, eye-tracking data collection and extraction approaches may need adjustments for the age groups and species studied. Doing so may make it easier to standardize and replicate eye-tracking research findings.
Affiliation(s)
- Guangyu Zeng: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Annika Paukner: Department of Psychology, Nottingham Trent University, Nottingham, UK
6. Taubert J, Wally S, Dixson BJ. Preliminary evidence of an increased susceptibility to face pareidolia in postpartum women. Biol Lett 2023; 19:20230126. PMID: 37700700; PMCID: PMC10498352; DOI: 10.1098/rsbl.2023.0126.
Abstract
As primates, we are hypersensitive to faces and face-like patterns in the visual environment, hence we often perceive illusory faces in otherwise inanimate objects, such as burnt pieces of toast and the surface of the moon. Although this phenomenon, known as face pareidolia, is a common experience, it is unknown whether our susceptibility to face pareidolia is static across our lifespan or what factors would cause it to change. Given the evidence that behaviour towards face stimuli is modulated by the neuropeptide oxytocin (OT), we reasoned that participants in stages of life associated with high levels of endogenous OT might be more susceptible to face pareidolia than participants in other stages of life. We tested this hypothesis by assessing pareidolia susceptibility in two groups of women; pregnant women (low endogenous OT) and postpartum women (high endogenous OT). We found evidence that postpartum women report seeing face pareidolia more easily than women who are currently pregnant. These data, collected online, suggest that our sensitivity to face-like patterns is not fixed and may change throughout adulthood, providing a crucial proof of concept that requires further research.
Affiliation(s)
- Jessica Taubert and Samantha Wally: School of Psychology, The University of Queensland, McElwain Building, St Lucia, 4072 Brisbane, Queensland, Australia
- Barnaby J. Dixson: School of Psychology, The University of Queensland, McElwain Building, St Lucia, 4072 Brisbane, Queensland, Australia; Psychology and Social Sciences, The University of Sunshine Coast, Sippy Downs, Australia
7. Pareidolic faces receive prioritized attention in the dot-probe task. Atten Percept Psychophys 2023; 85:1106-1126. PMID: 36918509; DOI: 10.3758/s13414-023-02685-6.
Abstract
Face pareidolia occurs when random or ambiguous inanimate objects are perceived as faces. While real faces automatically receive prioritized attention compared with nonface objects, it is unclear whether pareidolic faces similarly receive special attention. We hypothesized that, given the evolutionary importance of broadly detecting animacy, pareidolic faces may have enough faceness to activate a broad face template, triggering prioritized attention. To test this hypothesis, and to explore where along the faceness continuum pareidolic faces fall, we conducted a series of dot-probe experiments in which we paired pareidolic faces with other images directly competing for attention: objects, animal faces, and human faces. We found that pareidolic faces elicited more prioritized attention than objects, a process that was disrupted by inversion, suggesting this prioritized attention was unlikely to be driven by low-level features. However, unexpectedly, pareidolic faces received more privileged attention compared with animal faces and showed similar prioritized attention to human faces. This attentional efficiency may be due to pareidolic faces being perceived as not only face-like, but also as human-like, and having larger facial features-eyes and mouths-compared with real faces. Together, our findings suggest that pareidolic faces appear automatically attentionally privileged, similar to human faces. Our findings are consistent with the proposal of a highly sensitive broad face detection system that is activated by pareidolic faces, triggering false alarms (i.e., illusory faces), which, evolutionarily, are less detrimental relative to missing potentially relevant signals (e.g., conspecific or heterospecific threats). In sum, pareidolic faces appear "special" in attracting attention.
8. Bertels J, de Heering A, Bourguignon M, Cleeremans A, Destrebecqz A. What determines the neural response to snakes in the infant brain? A systematic comparison of color and grayscale stimuli. Front Psychol 2023; 14:1027872. PMID: 36993883; PMCID: PMC10040846; DOI: 10.3389/fpsyg.2023.1027872.
Abstract
Snakes and primates have coexisted for thousands of years. Given that snakes are the first of the major primate predators, natural selection may have favored primates whose snake detection abilities allowed for better defensive behavior. Aligning with this idea, we recently provided evidence for an inborn mechanism anchored in the human brain that promptly detects snakes, based on their characteristic visual features. Which visual features critically drive human neural responses to snakes remains an unresolved issue. While their prototypical curvilinear coiled shape seems of major importance, it remains possible that the brain responds to a blend of other visual features. Coloration, in particular, might be of major importance, as it has been shown to act as a powerful aposematic signal. Here, we specifically examine whether color impacts snake-specific responses in the naive, immature infant brain. For this purpose, we recorded the brain activity of 6- to 11-month-old infants using electroencephalography (EEG), while they watched sequences of color or grayscale animal pictures flickering at a periodic rate. We showed that glancing at colored and grayscale snakes generated specific neural responses in the occipital region of the brain. Color did not exert a major influence on the infant brain response but strongly increased the attention devoted to the visual streams. Remarkably, age predicted the strength of the snake-specific response. These results highlight that the expression of the brain-anchored reaction to coiled snakes bears on the refinement of the visual system.
Affiliation(s)
- Julie Bertels (corresponding author): ULBabyLab, Consciousness, Cognition and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium; Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LNT), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Adelaïde de Heering: LulLABy, Unité de Recherche en Neurosciences Cognitives (UNESCOG), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon: Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LNT), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium; Laboratory of Neurophysiology and Movement Biomechanics, ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Axel Cleeremans: ULBabyLab, Consciousness, Cognition and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Arnaud Destrebecqz: ULBabyLab, Consciousness, Cognition and Computation Group (CO3), Center for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), Brussels, Belgium
9. Tsurumi S, Kanazawa S, Yamaguchi MK, Kawahara JI. Development of upper visual field bias for faces in infants. Dev Sci 2023; 26:e13262. PMID: 35340093; PMCID: PMC10078383; DOI: 10.1111/desc.13262.
Abstract
The spatial location of the face and body seen in daily life influences human perception and recognition. This contextual effect of spatial locations suggests that daily experience affects how humans visually process the face and body. However, it remains unclear whether this effect is caused by experience, or innate neural pathways. To address this issue, we examined the development of visual field asymmetry for face processing, in which faces in the upper visual field were processed preferentially compared to the lower visual field. We found that a developmental change occurred between 6 and 7 months. Older infants aged 7-8 months showed bias toward faces in the upper visual field, similar to adults, but younger infants of 5-6 months showed no such visual field bias. Furthermore, older infants preferentially memorized faces in the upper visual field, rather than in the lower visual field. These results suggest that visual field asymmetry is acquired through development, and might be caused by the learning of spatial location in daily experience.
Affiliation(s)
- Shuma Tsurumi: Department of Psychology, Chuo University, Hachioji, Tokyo, Japan; Japan Society for the Promotion of Science, Chiyoda-ku, Tokyo, Japan
- So Kanazawa: Department of Psychology, Japan Women's University, Bunkyo-ku, Tokyo, Japan
10. Abassi Abu Rukab S, Khayat N, Hochstein S. High-level visual search in children with autism. J Vis 2022; 22:6. PMID: 35994261; PMCID: PMC9419456; DOI: 10.1167/jov.22.9.6.
Abstract
Visual search has been classified as easy feature search, with rapid target detection and little set size dependence, versus slower difficult search with focused attention, with set size-dependent speed. Reverse hierarchy theory attributes these classes to rapid high cortical-level vision at a glance versus low-level vision with scrutiny, attributing easy search to high-level representations. Accordingly, faces "pop out" of heterogeneous object photographs. Individuals with autism have difficulties recognizing faces, and we now asked if this disability disturbs their search for faces. We compare search times and set size slopes for children with autism spectrum disorders (ASDs) and those with neurotypical development (NT) when searching for faces. Human face targets were found rapidly, with shallow set size slopes. The between-group difference between slopes (18.8 vs. 11.3 ms/item) is significant, suggesting that faces may not "pop out" as easily, but in our view does not warrant classifying ASD face search as categorically different from that of NT children. We also tested search for different target categories, dog and lion faces, and nonface basic categories, cars and houses. The ASD group was generally a bit slower than the NT group, and their slopes were somewhat steeper. Nevertheless, the overall dependencies on target category were similar: human face search fastest, nonface categories slowest, and dog and lion faces in between. We conclude that autism may spare vision at a glance, including face detection, despite its reported effects on face recognition, which may require vision with scrutiny. This dichotomy is consistent with the two perceptual modes suggested by reverse hierarchy theory.
Affiliation(s)
- Safa'a Abassi Abu Rukab, Noam Khayat, and Shaul Hochstein: ELSC Edmond & Lily Safra Center for Brain Research and Silberman Institute for Life Sciences, Hebrew University, Jerusalem, Israel
11. Prunty JE, Jenkins R, Qarooni R, Bindemann M. Ingroup and outgroup differences in face detection. Br J Psychol 2022; 114(Suppl 1):94-111. PMID: 35876334; DOI: 10.1111/bjop.12588.
Abstract
Humans show improved recognition for faces from their own social group relative to faces from another social group. Yet before faces can be recognized, they must first be detected in the visual field. Here, we tested whether humans also show an ingroup bias at the earliest stage of face processing - the point at which the presence of a face is first detected. To this end, we measured viewers' ability to detect ingroup (Black and White) and outgroup faces (Asian, Black, and White) in everyday scenes. Ingroup faces were detected with greater speed and accuracy relative to outgroup faces (Experiment 1). Removing face hue impaired detection generally, but the ingroup detection advantage was undiminished (Experiment 2). This same pattern was replicated by a detection algorithm using face templates derived from human data (Experiment 3). These findings demonstrate that the established ingroup bias in face processing can extend to the early process of detection. This effect is 'colour blind', in the sense that group membership effects are independent of general effects of image hue. Moreover, it can be captured by tuning visual templates to reflect the statistics of observers' social experience. We conclude that group bias in face detection is both a visual and a social phenomenon.
Affiliation(s)
- Rob Jenkins: Department of Psychology, University of York, York, UK
- Rana Qarooni: Department of Psychology, University of York, York, UK
12. Qarooni R, Prunty J, Bindemann M, Jenkins R. Capacity limits in face detection. Cognition 2022; 228:105227. PMID: 35872362; DOI: 10.1016/j.cognition.2022.105227.
Abstract
Face detection is a prerequisite for further face processing, such as extracting identity or semantic information. Those later processes appear to be subject to strict capacity limits, but the location of the bottleneck is unclear. In particular, it is not known whether the bottleneck occurs before or after face detection. Here we present a novel test of capacity limits in face detection. Across four behavioural experiments, we assessed detection of multiple faces via observers' ability to differentiate between two types of display. Fixed displays comprised items of the same type (all faces or all non-faces). Mixed displays combined faces and non-faces. Critically, a 'fixed' response requires all items to be processed. We found that additional faces could be detected with no cost to efficiency, and that this capacity-free performance was contingent on visual context. The observed pattern was not specific to faces, but detection was more efficient for faces overall. Our findings suggest that strict capacity limits in face perception occur after the detection step.
Affiliation(s)
- Rana Qarooni: Department of Psychology, University of York, UK
- Rob Jenkins: Department of Psychology, University of York, UK
13. Pedale T, Mastroberardino S, Capurso M, Macrì S, Santangelo V. Developmental differences in the impact of perceptual salience on short-term memory performance and meta-memory skills. Sci Rep 2022; 12:8185. PMID: 35581267; PMCID: PMC9113989; DOI: 10.1038/s41598-022-11624-8.
Abstract
In everyday life, individuals are surrounded by many stimuli that compete to access attention and memory. Evidence shows that perceptually salient stimuli have more chances to capture attention resources, and thus to be encoded into short-term memory (STM). However, the impact of perceptual salience on STM at different developmental stages is entirely unexplored. Here we assessed STM performance and meta-memory skills of 6-, 10-, and 18-year-old participants (total N = 169) using a delayed match-to-sample task. On each trial, participants freely explored a complex (cartoon-like) scene for 4 s. After a retention interval of 4 s, they discriminated the same/different position of a target-object extracted from the area of maximal or minimal salience of the initially-explored scene. Then, they provided a confidence judgment of their STM performance, as an index of meta-memory skills. When taking into account 'confident' responses, we found increased STM performance following targets at maximal versus minimal salience only in adult participants. Similarly, only adults showed enhanced meta-memory capabilities following maximal versus minimal salience targets. These findings documented a late development in the impact of perceptual salience on STM performance and in the improvement of metacognitive capabilities to properly judge the content of one's own memory representation.
Affiliation(s)
- Tiziana Pedale: Functional Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- Serena Mastroberardino: Department of Psychology, School of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
- Michele Capurso: Department of Philosophy, Social Sciences and Education, University of Perugia, Piazza G. Ermini 1, 06123, Perugia, Italy
- Simone Macrì: Centre for Behavioural Sciences and Mental Health, Istituto Superiore di Sanità, Rome, Italy
- Valerio Santangelo: Functional Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Philosophy, Social Sciences and Education, University of Perugia, Piazza G. Ermini 1, 06123, Perugia, Italy
14. Pomaranski KI, Hayes TR, Kwon MK, Henderson JM, Oakes LM. Developmental changes in natural scene viewing in infancy. Dev Psychol 2021; 57:1025-1041. PMID: 34435820; PMCID: PMC8406411; DOI: 10.1037/dev0001020.
Abstract
We extend decades of research on infants' visual processing by examining their eye gaze during viewing of natural scenes. We examined the eye movements of a racially diverse group of 4- to 12-month-old infants (N = 54; 27 boys; 24 infants were White and not Hispanic, 30 infants were African American, Asian American, mixed race and/or Hispanic) as they viewed images selected from the MIT Saliency Benchmark Project. In general, across this age range infants' fixation distributions became more consistent and more adult-like, suggesting that infants' fixations in natural scenes become increasingly more systematic. Evaluation of infants' fixation patterns with saliency maps generated by different models of physical salience revealed that although over this age range there was an increase in the correlations between infants' fixations and saliency, the amount of variance accounted for by salience actually decreased. At the youngest age, the amount of variance accounted for by salience was very similar to the consistency between infants' fixations, suggesting that the systematicity in these youngest infants' fixations was explained by their attention to physically salient regions. By 12 months, in contrast, the consistency between infants was greater than the variance accounted for by salience, suggesting that the systematicity in older infants' fixations reflected more than their attention to physically salient regions. Together these results show that infants' fixations when viewing natural scenes becomes more systematic and predictable, and that predictability is due to their attention to features other than physical salience.
15
General and own-species attentional face biases. Atten Percept Psychophys 2020; 83:187-198. [PMID: 33025467] [DOI: 10.3758/s13414-020-02132-w]
Abstract
Humans demonstrate enhanced processing of human faces compared with animal faces, known as own-species bias. This bias is important for identifying people who may cause harm, as well as for recognizing friends and kin. However, growing evidence also indicates a more general face bias. Faces have high evolutionary importance beyond conspecific interactions, as they aid in detecting predators and prey. Few studies have explored these two biases together. In three experiments, we examined processing of human and animal faces, compared with each other and with nonface objects, allowing us to assess both own-species and broader face biases. We used a dot-probe paradigm to examine human adults' covert attentional biases for task-irrelevant human faces, animal faces, and objects. We replicated the own-species attentional bias for human faces relative to animal faces. We also found an attentional bias for animal faces relative to objects, consistent with the proposal that faces broadly receive privileged processing. Our findings suggest that humans may be attracted to a broad class of faces. Further, we found that while participants rapidly attended to human faces across all cue display durations, they attended to animal faces only when they had sufficient time to process them. Our findings reveal that the dot-probe paradigm is sensitive enough to capture both own-species and more general face biases, and that each has a different attentional signature, possibly reflecting their unique but overlapping evolutionary importance.
16
Effects of Farmers’ Facial Expression on Consumers’ Responses in Print Advertising of Local Food: The Moderating Role of Emotional Intelligence. J Food Quality 2020. [DOI: 10.1155/2020/8823205]
Abstract
In the context of ethical consumption, we examine the effects of farmers’ facial expression in print advertising on consumers’ responses to local food. We also examine the moderating role of emotional intelligence (EI) on consumers’ responses to the advertising message strategy. An advertising message strategy that connects farmers and consumers is expected to create more favorable responses among consumers toward local food and its retailers. This study examines consumers’ responses (perceived product quality, trust, and a positive attitude toward the local food retailer) to three conditions of farmers’ facial expression in the advertisement (neutral facial expression, positive facial expression, and product only, with no portrait) across two levels of EI (low and high). We find that farmers’ positive facial expressions in the advertisements have the greatest positive effects on consumers’ perceived product quality, trust, and attitude toward the local food retailer under a high level of EI. Individuals with a high level of EI were more influenced by facial expressions in print advertising, whereas those with a low level of EI were less influenced, and their responses did not differ between the neutral and positive facial expression conditions. Our findings suggest that marketing practitioners should consider personal characteristics such as EI when designing strategies to promote local food purchase and consumption in target markets.
17
Maylott SE, Paukner A, Ahn YA, Simpson EA. Human and monkey infant attention to dynamic social and nonsocial stimuli. Dev Psychobiol 2020; 62:841-857. [PMID: 32424813] [PMCID: PMC7944642] [DOI: 10.1002/dev.21979]
Abstract
The present study explored behavioral norms for infant social attention in typically developing human and nonhuman primate infants. We examined the normative development of attention to dynamic social and nonsocial stimuli longitudinally in macaques (Macaca mulatta) at 1, 3, and 5 months of age (N = 75) and humans at 2, 4, 6, 8, and 13 months of age (N = 69) using eye tracking. All infants viewed concurrently played silent videos: one social video and one nonsocial video. Both macaque and human infants were faster to look to the social than the nonsocial stimulus, and both species became faster to orient to the social stimulus with age. Further, macaque infants' social attention increased linearly from 1 to 5 months. In contrast, human infants displayed a nonlinear pattern of social interest, with initially greater attention to the social stimulus, followed by a period of greater interest in the nonsocial stimulus, and then a rise in social interest from 6 to 13 months. Overall, human infants looked longer than macaque infants, suggesting humans have more sustained attention in the first year of life. These findings highlight potential species similarities and differences, and reflect a first step in establishing baseline patterns of early social attention development.
Affiliation(s)
- Sarah E. Maylott
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
- Annika Paukner
- Department of Psychology, Nottingham Trent University, Nottingham, England
- Yeojin A. Ahn
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
18
Keenan B, Markant J. Differential sensitivity to species- and race-based information in the development of attention orienting and attention holding face biases in infancy. Dev Psychobiol 2020; 63:461-469. [PMID: 32803776] [DOI: 10.1002/dev.22027]
Abstract
Experience-based biases in face processing can reflect both attention orienting biases that support efficient selection of faces from competing stimuli and attention holding biases that allow for detailed encoding of selected faces. It is well established that infants demonstrate both species- and race-based biases in attention holding. Fewer studies have examined orienting biases in infancy; these studies found species-based, but not race-based, orienting biases, but they examined species- and race-based biases separately and measured overall orienting without examining attention to distractors. The present study directly compared 6- and 11-month-old infants' species- and race-based biases in attention holding and orienting to faces. We measured infants' duration of looking and frequency/speed of orienting to own-race, other-race, and monkey faces in multi-item search arrays, and their frequency of orienting to faces and distractors during search. Infants showed the expected species- and race-based biases in attention holding but only a species-based bias in overall orienting. However, they also showed reduced orienting to salient distractors in the context of own-race faces. These results suggest that the orienting mechanisms mediating face selection are robustly driven by species information, while orienting to faces versus distractors during search may also reflect prior learning about frequently experienced own-race faces.
Affiliation(s)
- Brianna Keenan
- Department of Psychology, Tulane University, New Orleans, Louisiana, USA
- Julie Markant
- Department of Psychology, Tulane University, New Orleans, Louisiana, USA; Tulane Brain Institute, Tulane University, New Orleans, Louisiana, USA
19
Simpson EA, Maylott SE, Mitsven SG, Zeng G, Jakobsen KV. Face detection in 2- to 6-month-old infants is influenced by gaze direction and species. Dev Sci 2019; 23:e12902. [PMID: 31505079] [DOI: 10.1111/desc.12902]
Abstract
Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills (attending to faces and attending to the eyes), surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2-, 4-, and 6-month-olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces than for animal faces. Unexpectedly, human faces with both averted heads and averted eyes received the most attention. This pattern may reflect the early emergence of gaze following (the ability to look where another individual looks), which begins to develop around this age. Infants may be especially interested in averted-gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized, functional conspecific face detection.
Affiliation(s)
- Sarah E Maylott
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Guangyu Zeng
- Department of Psychology, University of Miami, Coral Gables, FL, USA