1. Gehdu BK, Tsantani M, Press C, Gray KL, Cook R. Recognition of facial expressions in autism: Effects of face masks and alexithymia. Q J Exp Psychol (Hove) 2023; 76:2854-2864. PMID: 36872641. DOI: 10.1177/17470218231163007.
Abstract
It is often assumed that the recognition of facial expressions is impaired in autism. However, recent evidence suggests that reports of expression recognition difficulties in autistic participants may be attributable to co-occurring alexithymia (a trait associated with difficulties interpreting interoceptive and emotional states), not autism per se. Due to problems fixating on the eye-region, autistic individuals may be more reliant on information from the mouth region when judging facial expressions. As such, it may be easier to detect expression recognition deficits attributable to autism, not alexithymia, when participants are forced to base expression judgements on the eye-region alone. To test this possibility, we compared the ability of autistic participants (with and without high levels of alexithymia) and non-autistic controls to categorise facial expressions (a) when the whole face was visible, and (b) when the lower portion of the face was covered with a surgical mask. High-alexithymic autistic participants showed clear evidence of expression recognition difficulties: they correctly categorised fewer expressions than non-autistic controls. In contrast, low-alexithymic autistic participants were unimpaired relative to non-autistic controls. The same pattern of results was seen when judging masked and unmasked expression stimuli. In sum, we find no evidence for an expression recognition deficit attributable to autism, in the absence of high levels of co-occurring alexithymia, either when participants judge whole-face stimuli or just the eye-region. These findings underscore the influence of co-occurring alexithymia on expression recognition in autism.
Affiliation(s)
- Bayparvah Kaur Gehdu: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Maria Tsantani: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Clare Press: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Wellcome Centre for Human Neuroimaging, University College London (UCL), London, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; School of Psychology, University of Leeds, Leeds, UK
2. Barzy M, Morgan R, Cook R, Gray KLH. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness. Q J Exp Psychol (Hove) 2023; 76:2293-2302. PMID: 36847458. PMCID: PMC10503233. DOI: 10.1177/17470218231161044.
Abstract
In change detection paradigms, changes to social or animate aspects of a scene are detected better and faster compared with non-social or inanimate aspects. While previous studies have focused on how changes to individual faces/bodies are detected, it is possible that individuals presented within a social interaction may be further prioritised, as the accurate interpretation of social interactions may convey a competitive advantage. Over three experiments, we explored change detection in complex real-world scenes, in which the change was the removal of either (a) an individual on their own, (b) an individual who was interacting with others, or (c) an object. In Experiment 1 (N = 50), we measured change detection for non-interacting individuals versus objects. In Experiment 2 (N = 49), we measured change detection for interacting individuals versus objects. Finally, in Experiment 3 (N = 85), we measured change detection for non-interacting versus interacting individuals. We also ran an inverted version of each task to determine whether differences were driven by low-level visual features. In Experiments 1 and 2, we found that changes to non-interacting and interacting individuals were detected better and more quickly than changes to objects. We also found inversion effects for both non-interaction and interaction changes, whereby they were detected more quickly when upright compared with inverted. No such inversion effect was seen for objects. This suggests that the high-level, social content of the images was driving the faster change detection for social versus object targets. Finally, we found that changes to individuals in non-interactions were detected faster than those presented within an interaction. Our results replicate the social advantage often found in change detection paradigms. However, we find that changes to individuals presented within social interaction configurations do not appear to be more quickly and easily detected than those in non-interacting configurations.
Affiliation(s)
- Mahsa Barzy: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Rachel Morgan: School of Mathematics and Statistics, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
3. Kuroki D, Pronk T. jsQuestPlus: A JavaScript implementation of the QUEST+ method for estimating psychometric function parameters in online experiments. Behav Res Methods 2023; 55:3179-3186. PMID: 36070128. PMCID: PMC9450820. DOI: 10.3758/s13428-022-01948-8.
Abstract
The two Bayesian adaptive psychometric methods named QUEST (Watson & Pelli, 1983) and QUEST+ (Watson, 2017) are widely used to estimate psychometric parameters, especially the threshold, in laboratory-based psychophysical experiments. Considering the increase in online psychophysical experiments in recent years, there is a growing need to have the QUEST and QUEST+ methods available online as well. We developed JavaScript libraries for both, with this article introducing one of them: jsQuestPlus. We offer integrations with online experimental tools such as jsPsych (de Leeuw, 2015), PsychoPy/JS (Peirce et al., 2019), and lab.js (Henninger et al., 2021). We measured the computation time required by jsQuestPlus under four conditions. Our simulations on 37 browser-computer combinations showed that the mean initialization time was 461.08 ms, 95% CI [328.29, 593.87], the mean computation time required to determine the stimulus parameters for the next trial was less than 1 ms, and the mean update time was 79.39 ms, 95% CI [46.22, 112.55], even in extremely demanding conditions. Additionally, psychometric parameters were estimated as accurately as with the original QUEST+ method. We conclude that jsQuestPlus is fast and accurate enough to conduct online psychophysical experiments despite the complexity of the matrix calculations. The latest version of jsQuestPlus can be downloaded freely from https://github.com/kurokida/jsQuestPlus under the MIT license.
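For orientation, the snippet below sketches how jsQuestPlus might be wired into a browser-based experiment. It is a hypothetical illustration, not an excerpt from the paper: the constructor options (psych_func, stim_samples, psych_samples), the helper jsQuestPlus.linspace, and the methods getStimParams, update, and getEstimates are written from memory of the project README, and the parameter grids are arbitrary, so everything should be checked against https://github.com/kurokida/jsQuestPlus before use.

```javascript
// Assumes the jsQuestPlus library has already been loaded on the page,
// e.g. via <script src="jsQuestPlus.js"></script> from the GitHub distribution.

// Psychometric functions for the two response categories (incorrect / correct).
// This Weibull-style form in dB units follows the example in the project README
// as recalled here; the exact form used in a real study would differ.
function probIncorrect(stim, threshold, slope, guess, lapse) {
  const tmp = (slope * (stim - threshold)) / 20;
  return lapse - (guess + lapse - 1) * Math.exp(-Math.pow(10, tmp));
}
function probCorrect(stim, threshold, slope, guess, lapse) {
  return 1 - probIncorrect(stim, threshold, slope, guess, lapse);
}

// Candidate stimulus intensities and parameter grids (illustrative values only).
const contrastSamples = jsQuestPlus.linspace(-40, 0);   // stimulus contrasts in dB
const thresholdSamples = jsQuestPlus.linspace(-40, 0);  // candidate thresholds in dB
const slopeSamples = [3.5];
const guessSamples = [0.5];   // two-alternative forced choice
const lapseSamples = [0.02];

const quest = new jsQuestPlus({
  psych_func: [probIncorrect, probCorrect],
  stim_samples: [contrastSamples],
  psych_samples: [thresholdSamples, slopeSamples, guessSamples, lapseSamples],
});

// On each trial: ask QUEST+ for the most informative stimulus, present it,
// then feed the observer's 0/1 response back into the posterior.
const stim = quest.getStimParams();
// const response = ...;            // 0 = incorrect, 1 = correct, from the participant
// quest.update(stim, response);

// After the run, read out the Bayesian parameter estimates (threshold, slope, ...).
// const estimates = quest.getEstimates();
```

In a hosted experiment, the getStimParams/update pair would typically sit inside the trial callbacks of whichever tool runs the study (e.g., jsPsych or lab.js).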
Affiliation(s)
- Daiichiro Kuroki: Department of Psychology, School of Letters, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan
- Thomas Pronk: Behavioural Science Lab, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
4. Pérez-Fabello MJ, Campos A. The Müller-Lyer illusion through mental imagery. Current Psychology 2022. DOI: 10.1007/s12144-022-03979-y.
Abstract
Previous studies have pointed to a link between visual perception and mental imagery. The present experiment focuses on one of the best-known illusions, the Müller-Lyer illusion, here reproduced both under conditions of real perception and by means of imagery. To that purpose, a purpose-built set of combined figures was presented to a total of 161 fine art students (M age = 20.34, SD = 1.75), who individually worked with two variations of the Müller-Lyer figure, each consisting of a 10 mm shaft and two fins set at an angle of 30°, with fins 15 mm long in one variant and 45 mm long in the other. In small groups, participants also completed an image control questionnaire. Results showed that the longer the fins, the larger the magnitude of the illusion, both in the real-perception condition and in the imagery condition. The magnitude of the illusion was also greater under real perception than under imagery, both with 15 mm fins and with 45 mm fins. However, no significant differences in the magnitude of the illusion were found between individuals high and low in image control, although interactions between image control and other variables were significant. The consistency of these outcomes is a step forward in the study of illusions through mental images and opens the door to new lines of research involving novel methods of analysis, different versions of the illusion, and larger groups of participants.
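To make the stated geometry concrete, the sketch below draws a single-shaft Müller-Lyer figure with the dimensions given in the abstract (10 mm shaft, fins at 30°, 15 mm or 45 mm long) onto an HTML canvas. This is a hypothetical reconstruction for illustration only: the abstract does not specify the layout of the paper's combined figures or the fin configuration, and the pixels-per-millimetre scale and canvas size are arbitrary assumptions.

```javascript
// Draw one Müller-Lyer figure: a horizontal shaft with two fins at each end,
// pointing away from the shaft (wings-out) or back over it (wings-in).
function drawMullerLyer(ctx, x, y, shaftMm, finMm, finAngleDeg, wingsOut) {
  const pxPerMm = 2;                               // arbitrary rendering scale (assumption)
  const shaft = shaftMm * pxPerMm;
  const fin = finMm * pxPerMm;
  const angle = (finAngleDeg * Math.PI) / 180;
  const dir = wingsOut ? 1 : -1;
  ctx.beginPath();
  ctx.moveTo(x, y);
  ctx.lineTo(x + shaft, y);                        // the shaft
  for (const [endX, sign] of [[x, -1], [x + shaft, 1]]) {
    for (const up of [-1, 1]) {                    // one fin above, one below the shaft line
      ctx.moveTo(endX, y);
      ctx.lineTo(endX + dir * sign * fin * Math.cos(angle),
                 y + up * fin * Math.sin(angle));
    }
  }
  ctx.stroke();
}

// Example: the short-fin (15 mm) and long-fin (45 mm) variants, wings-out,
// assuming the page contains a <canvas> element.
const canvas = document.querySelector("canvas");
canvas.width = 400;
canvas.height = 300;
const ctx = canvas.getContext("2d");
drawMullerLyer(ctx, 150, 80, 10, 15, 30, true);
drawMullerLyer(ctx, 150, 200, 10, 45, 30, true);
```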
5. Yu L, Li Y. A study of practical drawing skills and knowledge transferable skills of children based on STEAM education. Front Psychol 2022; 13:1001521. DOI: 10.3389/fpsyg.2022.1001521.
Abstract
STEAM education involves children's ability to integrate and apply their knowledge of science, technology, engineering, arts, and mathematics. The application and transfer of interdisciplinary knowledge in practical activities forms the core of STEAM education. This study assesses children's practical drawing skills and transferable skills based on the global features of their realistic figure drawings. The drawings incorporate the visual information and the multidisciplinary knowledge that children acquire. The global features assessed are observation perspective, baseline, and comparison. The results showed that most children present their figures from the front view. Children of different age groups differ in how they express the baseline and comparison features, and boys and girls show some differences in baseline features. Moreover, children are relatively unskilled at applying interdisciplinary knowledge in their drawings.
6. Tsantani M, Gray KLH, Cook R. New evidence of impaired expression recognition in developmental prosopagnosia. Cortex 2022; 154:15-26. PMID: 35728295. DOI: 10.1016/j.cortex.2022.05.008.
Abstract
Developmental prosopagnosia (DP) is a neurodevelopmental condition characterized by lifelong face recognition difficulties. To date, it remains unclear whether or not individuals with DP experience impaired recognition of facial expressions. It has been proposed that DPs may have sufficient perceptual ability to correctly interpret facial expressions when tasks are relatively easy (e.g., the stimuli are unambiguous and viewing conditions are optimal), but exhibit subtle impairments when tested under more challenging conditions. In the present study, we sought to take advantage of the COVID-19 pandemic to test this view. It is well-established that the surgical-type masks worn during the pandemic hinder the recognition and interpretation of facial emotion in typical participants. We hypothesized that, relative to typical participants, DPs may be disproportionately impaired when asked to interpret the facial emotion of people wearing face masks. We compared the ability of 34 DPs and 60 age-matched typical controls to recognize facial emotions i) when the whole face is visible, and ii) when the lower portion of the face is covered with a surgical mask. When expression stimuli were viewed without a mask, the DPs and typical controls exhibited similar levels of performance. However, when expression stimuli were shown with a mask, the DPs showed signs of subtle expression recognition deficits. The DPs were particularly prone to mislabeling masked expressions of happiness as emotionally neutral. These results add to a growing body of evidence that under some conditions, DPs do exhibit subtle deficits of expression recognition.
Affiliation(s)
- Maria Tsantani: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
7. Gehdu BK, Gray KLH, Cook R. Impaired grouping of ambient facial images in autism. Sci Rep 2022; 12:6665. PMID: 35461345. PMCID: PMC9035147. DOI: 10.1038/s41598-022-10630-0.
Abstract
Ambient facial images depict individuals from a variety of viewing angles, with a range of poses and expressions, under different lighting conditions. Exposure to ambient images is thought to help observers form robust representations of the individuals depicted. Previous results suggest that autistic people may derive less benefit from exposure to this exemplar variation than non-autistic people. To date, however, it remains unclear why. One possibility is that autistic individuals possess atypical perceptual learning mechanisms. Alternatively, however, the learning mechanisms may be intact, but receive low-quality perceptual input from face encoding processes. To examine this second possibility, we investigated whether autistic people are less able to group ambient images of unfamiliar individuals based on their identity. Participants were asked to identify which of four ambient images depicted an oddball identity. Each trial assessed the grouping of different facial identities, thereby preventing face learning across trials. As such, the task assessed participants’ ability to group ambient images of unfamiliar people. In two experiments we found that matched non-autistic controls correctly identified the oddball identities more often than our autistic participants. These results imply that poor face learning from variation by autistic individuals may well be attributable to low-quality perceptual input, not aberrant learning mechanisms.
Affiliation(s)
- Bayparvah Kaur Gehdu: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK; Department of Psychology, University of York, York, UK
8. Vestner T, Balsys E, Over H, Cook R. The self-consistency effect seen on the Dot Perspective Task is a product of domain-general attention cueing, not automatic perspective taking. Cognition 2022; 224:105056. PMID: 35149309. DOI: 10.1016/j.cognition.2022.105056.
Abstract
It has been proposed that humans automatically compute the visual perspective of others. Evidence for this view comes from the Dot Perspective Task. In this task, participants view a room in which a human actor is depicted, looking either leftwards or rightwards. Dots can appear on the left wall of the room, on the right wall, or on both. At the start of each trial, participants are shown a number. Their speeded task is to decide whether the number of dots visible matches the number shown. On consistent trials, the participant and the actor can see the same number of dots. On inconsistent trials, the participant and the actor can see a different number of dots. Participants respond faster on consistent trials than on inconsistent trials. This self-consistency effect is cited as evidence that participants compute the visual perspective of others automatically, even when it impedes their task performance. According to a rival interpretation, however, this effect is a product of attention cueing: slower responding on inconsistent trials simply reflects the fact that participants' attention is directed away from some or all of the to-be-counted dots. The present study sought to test these rival accounts. We find that desk fans, a class of inanimate object known to cue attention, also produce the self-consistency effect. Moreover, people who are more susceptible to the effect induced by fans tend to be more susceptible to the effect induced by human actors. These findings suggest that the self-consistency effect is a product of attention cueing.
Affiliation(s)
- Tim Vestner: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Elizabeth Balsys: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Harriet Over: Department of Psychology, University of York, York, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
9. Tsantani M, Podgajecka V, Gray KLH, Cook R. How does the presence of a surgical face mask impair the perceived intensity of facial emotions? PLoS One 2022; 17:e0262344. PMID: 35025948. PMCID: PMC8758043. DOI: 10.1371/journal.pone.0262344.
Abstract
The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants' ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant's future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers' interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except for anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs 3000 ms), or attitudes towards mask wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.
Affiliation(s)
- Maria Tsantani: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Vita Podgajecka: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
10. Tsantani M, Vestner T, Cook R. The Twenty Item Prosopagnosia Index (PI20) provides meaningful evidence of face recognition impairment. R Soc Open Sci 2021; 8:202062. PMID: 34737872. PMCID: PMC8564608. DOI: 10.1098/rsos.202062.
Abstract
The Twenty Item Prosopagnosia Index (PI20) is a self-report questionnaire used for quantifying prosopagnosic traits. This scale is intended to help researchers identify cases of developmental prosopagnosia by providing standardized self-report evidence to complement diagnostic evidence obtained from objective computer-based tasks. In order to respond appropriately to items, prosopagnosics must have some insight that their face recognition is well below average, while non-prosopagnosics need to understand that their relative face recognition ability falls within the typical range. There has been considerable debate about whether participants have the necessary insight into their face recognition abilities to respond appropriately. In the present study, we sought to determine whether the PI20 provides meaningful evidence of face recognition impairment. In keeping with the intended use of the instrument, we used PI20 scores to identify two groups: high-PI20 scorers (those with self-reported face recognition difficulties) and low-PI20 scorers (those with no self-reported face recognition difficulties). We found that participant groups distinguished on the basis of PI20 scores clearly differed in terms of their mean performance on objective measures of face recognition ability. We also found that high-PI20 scorers were more likely to achieve levels of face recognition accuracy associated with developmental prosopagnosia.
Affiliation(s)
- Maria Tsantani: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Tim Vestner: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK
11. Vestner T, Over H, Gray KLH, Tipper SP, Cook R. Searching for people: Non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs. Cognition 2021; 214:104737. PMID: 33901835. PMCID: PMC8346951. DOI: 10.1016/j.cognition.2021.104737.
Abstract
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely-related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front, than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and types of common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention due to a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation, when distractor pairs are arranged back-to-back.
Affiliation(s)
- Tim Vestner: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Harriet Over: Department of Psychology, University of York, York, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK