1
Soto FA, Beevers CG. Perceptual Observer Modeling Reveals Likely Mechanisms of Face Expression Recognition Deficits in Depression. Biol Psychiatry Cogn Neurosci Neuroimaging 2024:S2451-9022(24)00044-2. PMID: 38336169. DOI: 10.1016/j.bpsc.2024.01.011.
Abstract
BACKGROUND: Deficits in face emotion recognition are well documented in depression, but the underlying mechanisms are poorly understood. Psychophysical observer models provide a way to precisely characterize such mechanisms. Using model-based analyses, we tested two hypotheses about how depression may reduce sensitivity to detect face emotion: (1) via a change in selectivity for visual information diagnostic of emotion or (2) via a change in signal-to-noise ratio in the system performing emotion detection.
METHODS: Sixty adults, one half meeting criteria for major depressive disorder and the other half healthy control participants, identified sadness and happiness in noisy face stimuli, and their responses were used to estimate templates encoding the visual information used for emotion identification. We analyzed these templates using traditional and model-based analyses; in the latter, the match between templates and stimuli, representing sensory evidence for the information encoded in the template, was compared against behavioral data.
RESULTS: Estimated happiness templates produced sensory evidence that was less strongly correlated with response times in participants with depression than in control participants, suggesting that depression was associated with a reduced signal-to-noise ratio in the detection of happiness. The opposite results were found for the detection of sadness. We found little evidence that depression was accompanied by changes in selectivity (i.e., information used to detect emotion), but depression was associated with a stronger influence of face identity on selectivity.
CONCLUSIONS: Depression is more strongly associated with changes in signal-to-noise ratio during emotion recognition, suggesting that deficits in emotion detection are driven primarily by degraded signal quality rather than suboptimal sampling of the information used to detect emotion.
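The template-estimation approach described in this abstract (inferring an observer's internal template from responses to noisy stimuli, then using the template-stimulus match as a measure of sensory evidence) follows the standard reverse-correlation recipe. A minimal sketch with simulated data; all variable names are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: each trial presents a noisy face stimulus
# (flattened to a pixel vector).
n_trials, n_pixels = 500, 64
true_template = rng.normal(size=n_pixels)           # observer's internal template
stimuli = rng.normal(size=(n_trials, n_pixels))     # noise-perturbed stimuli

# Sensory evidence = match (dot product) between template and stimulus;
# the observer responds "emotion present" when evidence exceeds criterion.
evidence = stimuli @ true_template
responses = evidence > 0

# Reverse-correlation estimate of the template: mean stimulus on
# "present" trials minus mean stimulus on "absent" trials.
est_template = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)

# The estimate recovers the true template up to noise and scale.
r = np.corrcoef(est_template, true_template)[0, 1]
```

In the study itself, the analogous evidence values (template-stimulus matches) are correlated with behavioral measures such as response times, which is where the group difference in signal-to-noise ratio shows up.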
Affiliation(s)
- Fabian A Soto
- Department of Psychology, Florida International University, Miami, Florida
2
Wang Z, Manassi M, Ren Z, Ghirardo C, Canas-Bajo T, Murai Y, Zhou M, Whitney D. Idiosyncratic biases in the perception of medical images. Front Psychol 2022; 13:1049831. PMID: 36600706. PMCID: PMC9806180. DOI: 10.3389/fpsyg.2022.1049831.
Abstract
Introduction: Radiologists routinely make life-altering decisions. Optimizing these decisions has been an important goal for many years and has prompted a great deal of research on the basic perceptual mechanisms that underlie radiologists' decisions. Previous studies have found that there are substantial individual differences in radiologists' diagnostic performance (e.g., sensitivity) due to experience, training, or search strategies. In addition to variations in sensitivity, however, another possibility is that radiologists might have perceptual biases: systematic misperceptions of visual stimuli. Although a great deal of research has investigated radiologist sensitivity, very little has explored the presence of perceptual biases or the individual differences in these biases.
Methods: Here, we test whether radiologists have perceptual biases using controlled artificial stimuli and realistic medical images generated with generative adversarial networks (GANs). In Experiment 1, observers adjusted the appearance of simulated tumors to match previously shown targets. In Experiment 2, observers were shown a mix of real and GAN-generated CT lesion images and rated the realness of each image.
Results: We show that every tested radiologist was characterized by unique and systematic perceptual biases; these biases cannot be explained simply by attentional differences, and they were observed across different imaging modalities and task settings, suggesting that idiosyncratic biases in medical image perception may be widespread.
Discussion: Characterizing and understanding these biases could be important for many practical settings, such as training, pairing readers, and career selection for radiologists. These results may have consequential implications for many other fields as well, where individual observers are the linchpins for life-altering perceptual decisions.
Affiliation(s)
- Zixuan Wang
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Mauro Manassi
- School of Psychology, University of Aberdeen, King's College, Aberdeen, United Kingdom
- Zhihang Ren
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States; Vision Science Group, University of California, Berkeley, Berkeley, CA, United States
- Cristina Ghirardo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Teresa Canas-Bajo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States; Vision Science Group, University of California, Berkeley, Berkeley, CA, United States
- Yuki Murai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Koganei, Japan
- Min Zhou
- Department of Pediatrics, The First People's Hospital of Shuangliu District, Chengdu, Sichuan, China
- David Whitney
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States; Vision Science Group, University of California, Berkeley, Berkeley, CA, United States; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
3
Perceived similarity ratings predict generalization success after traditional category learning and a new paired-associate learning task. Psychon Bull Rev 2021; 27:791-800. PMID: 32472329. DOI: 10.3758/s13423-020-01754-3.
Abstract
The current study investigated category learning across two experiments using face-blend stimuli that formed face families controlled for within- and between-category similarity. Experiment 1 was a traditional feedback-based category-learning task, with three family names serving as category labels. In Experiment 2, the shared family name was encountered in the context of a face-full-name paired-associate learning task, with a unique first name for each face. A subsequent test that required participants to categorize new faces from each family showed successful generalization in both experiments. Furthermore, perceived similarity ratings for pairs of faces were collected before and after learning, prior to the generalization test. In Experiment 1, similarity ratings increased for faces within a family and decreased for faces that were physically similar but belonged to different families. In Experiment 2, overall similarity ratings decreased after learning, driven primarily by decreases for physically similar faces from different families. The post-learning category bias in similarity ratings was predictive of subsequent generalization success in both experiments. The results indicate that individuals formed generalizable category knowledge prior to an explicit demand to generalize, and did so both when attention was directed towards category-relevant features (Experiment 1) and when attention was directed towards individuating faces within a family (Experiment 2). The results tie together research on category learning and categorical perception and extend them beyond a traditional category-learning task.
4
Soto FA, Escobar K, Salan J. Adaptation aftereffects reveal how categorization training changes the encoding of face identity. J Vis 2020; 20:18. PMID: 33064122. PMCID: PMC7571276. DOI: 10.1167/jov.20.10.18.
Abstract
Previous research suggests that learning to categorize faces along a novel dimension changes the perceptual representation of that dimension, increasing its discriminability, its invariance, and the information used to identify faces varying along the dimension. A common interpretation of these results is that categorization training promotes the creation of novel dimensions, rather than simply the enhancement of already existing representations. Here, we trained a group of participants to categorize faces that varied along two morphing dimensions, one relevant to the categorization task and the other irrelevant to it. An untrained group did not receive such categorization training. In three experiments, we used face adaptation aftereffects to explore how categorization training changes the encoding of face identities at the extremes of the category-relevant dimension and whether such training produces encoding of the category-relevant dimension as a preferred direction in face space. The pattern of results suggests that categorization training enhances the already existing norm-based coding of face identity, rather than creating novel category-relevant representations. We formalized this conclusion in a model that explains the most important results in our experiments and serves as a working hypothesis for future work in this area.
Affiliation(s)
- Fabian A Soto
- Department of Psychology, Florida International University, Miami, FL, USA
- Karla Escobar
- Department of Psychology, Florida International University, Miami, FL, USA
5
FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behav Res Methods 2020; 52:2604-2622. PMID: 32519291. DOI: 10.3758/s13428-020-01421-4.
Abstract
A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open-source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a "morphing" sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D "anti-faces" and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features; and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
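The morphing, anti-face, and caricature operations listed in this abstract reduce to simple arithmetic on model parameter vectors. A minimal sketch of the underlying idea; the function names and vector representation are illustrative, not FaReT's actual API:

```python
import numpy as np

def morph_sequence(model_a, model_b, n_steps):
    """Linear interpolation between two face parameter vectors:
    weight 0 yields model_a, weight 1 yields model_b."""
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1 - w) * model_a + w * model_b for w in weights]

def extrapolate(model, average, strength):
    """Extrapolate relative to the average face: strength > 1 exaggerates
    (a caricature); strength = -1 gives the 'anti-face' on the opposite
    side of the average in face space."""
    return average + strength * (model - average)

# Toy 3-parameter face models (real models have many more parameters).
face_a = np.array([0.0, 0.0, 0.0])
face_b = np.array([1.0, 2.0, -1.0])

seq = morph_sequence(face_a, face_b, 5)        # 5-step morph from A to B
avg = (face_a + face_b) / 2                    # average face of the pair
anti = extrapolate(face_b, avg, strength=-1.0) # anti-face of B (= face_a here)
```

Each vector in the sequence would then be handed to the renderer (MakeHuman, in the toolkit's case) to produce the actual stimulus image.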