1.
Hess U, Hareli S, Scarantino A. What is it about your face that tells me what you want from me? Emotional appeals are associated with specific mental images. Cogn Emot 2024; 38:389-398. [PMID: 37847300] [DOI: 10.1080/02699931.2023.2266991]
Abstract
Emotional facial expressions have a communicative function. Besides information about the internal states (emotions) and the intentions of the expresser (action tendencies), they also communicate what the expresser wants the observer to do (appeals). Yet, there is very little research on the association of appeals with specific emotions. The present study examined the mental association between appeals and expressions: using reverse correlation, we estimated the observer-specific internal representations of expressions associated with four different appeals. A second group of participants rated the resulting expressions. As predicted, we found that the appeal to celebrate was uniquely associated with a happy expression and the appeal to empathize with a sad expression. A pleading appeal to stop was more strongly associated with sadness than with anger, whereas a command to stop was comparatively more strongly associated with anger. The results show that observers internally represent appeals as specific emotional expressions.
Affiliation(s)
- Ursula Hess
- Department of Psychology, Humboldt-University of Berlin, Berlin, Germany
- Shlomo Hareli
- School of Business Administration, University of Haifa, Haifa, Israel
- Andrea Scarantino
- Department of Philosophy, Georgia State University, Atlanta, GA, USA
2.
Skog E, Meese TS, Sargent IMJ, Ormerod A, Schofield AJ. Classification images for aerial images capture visual expertise for binocular disparity and a prior for lighting from above. J Vis 2024; 24:11. [PMID: 38607637] [PMCID: PMC11019598] [DOI: 10.1167/jov.24.4.11]
Abstract
Using a novel approach to classification images (CIs), we investigated the visual expertise of surveyors for luminance and binocular disparity cues simultaneously after screening for stereoacuity. Stereoscopic aerial images of hedges and ditches were classified in 10,000 trials by six trained remote sensing surveyors and six novices. Images were heavily masked with luminance and disparity noise simultaneously. Hedge and ditch images had reversed disparity on around half the trials, meaning hedges became ditch-like and vice versa. The hedge and ditch images were also flipped vertically on around half the trials, changing the direction of the light source and completing a 2 × 2 × 2 stimulus design. CIs were generated by accumulating the noise textures associated with "hedge" and "ditch" classifications, respectively, and subtracting one from the other. Typical CIs had a central peak with one or two negative side-lobes. We found clear differences in the amplitudes and shapes of perceptual templates across groups and noise types, with experts prioritizing binocular disparity and using it more effectively. Contrariwise, novices used luminance cues more than experts, meaning that task motivation alone could not explain group differences. Asymmetries in the luminance CIs revealed individual differences in lighting interpretation, with experts less prone to assume lighting from above, consistent with their training on aerial images of UK scenes lit by a southerly sun. Our results show that (i) dual noise in images can be used to produce simultaneous CI pairs, (ii) expertise for disparity cues does not depend on stereoacuity, (iii) CIs reveal the visual strategies developed by experts, (iv) top-down perceptual biases can be overcome with long-term learning effects, and (v) CIs have practical potential for directing visual training.
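The CI computation described above reduces to averaging the noise fields split by the observer's response and subtracting. A minimal NumPy sketch (array names and sizes are illustrative; the study applied the same subtraction separately to its luminance and disparity noise fields to get a simultaneous CI pair):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one noise texture per trial (flattened 32x32 pixels)
# and the observer's classification on that trial ("hedge" or "ditch").
n_trials, side = 10_000, 32
noise = rng.standard_normal((n_trials, side * side))
responses = rng.choice(["hedge", "ditch"], n_trials)

# Classification image: mean noise on "hedge" trials minus mean on "ditch" trials.
ci = (noise[responses == "hedge"].mean(axis=0)
      - noise[responses == "ditch"].mean(axis=0))
ci_image = ci.reshape(side, side)
```

With a real observer the CI shows the template-like structure (central peak, negative side-lobes) the abstract describes; with the random responses used here it is just residual noise.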
Affiliation(s)
- Emil Skog
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, UK
- Aston Laboratory for Immersive Virtual Environments, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, UK
- Department of Health, Learning and Technology, Luleå University of Technology, Luleå, Sweden
- Timothy S Meese
- Aston Laboratory for Immersive Virtual Environments, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, UK
- https://research.aston.ac.uk/en/persons/tim-s-meese
- Isabel M J Sargent
- Ordnance Survey, Adanac Drive, Southampton, SO16 0AS, UK
- Electronics and Computer Science, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- http://www.os.uk/
- Andrew Ormerod
- Ordnance Survey, Adanac Drive, Southampton, SO16 0AS, UK
- http://www.os.uk/
- Andrew J Schofield
- School of Psychology, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, UK
- Aston Laboratory for Immersive Virtual Environments, College of Health and Life Sciences, Aston University, Birmingham, B4 7ET, UK
- https://research.aston.ac.uk/en/persons/andrew-schofield
3.
Hutchings RJ, Freiburger E, Sim M, Hugenberg K. Racial Prejudice Affects Representations of Facial Trustworthiness. Psychol Sci 2024; 35:263-276. [PMID: 38300733] [DOI: 10.1177/09567976231225094]
Abstract
What makes faces seem trustworthy? We investigated how racial prejudice predicts the extent to which perceivers employ racially prototypical cues to infer trustworthiness from faces. We constructed participant-level computational models of trustworthiness and White-to-Black prototypicality from U.S. college students' judgments of White (Study 1, N = 206) and Black-White morphed (Study 3, N = 386) synthetic faces. Although the average relationships between models differed across stimuli, both studies revealed that as participants' anti-Black prejudice increased and/or intergroup contact decreased, so too did participants' tendency to conflate White prototypical features with trustworthiness and Black prototypical features with untrustworthiness. Study 2 (N = 324) and Study 4 (N = 397) corroborated that untrustworthy faces constructed from participants with pro-White preferences appeared more Black prototypical to naive U.S. adults, relative to untrustworthy faces modeled from other participants. This work highlights the important role of racial biases in shaping impressions of facial trustworthiness.
Affiliation(s)
- Ryan J Hutchings
- Department of Psychological and Brain Sciences, Indiana University-Bloomington
- Erin Freiburger
- Department of Psychological and Brain Sciences, Indiana University-Bloomington
- Mattea Sim
- Department of Psychological and Brain Sciences, Indiana University-Bloomington
- Kurt Hugenberg
- Department of Psychological and Brain Sciences, Indiana University-Bloomington
4.
Xue S, Fernández A, Carrasco M. Featural Representation and Internal Noise Underlie the Eccentricity Effect in Contrast Sensitivity. J Neurosci 2024; 44:e0743232023. [PMID: 38050093] [PMCID: PMC10860475] [DOI: 10.1523/jneurosci.0743-23.2023]
Abstract
Human visual performance for basic visual dimensions (e.g., contrast sensitivity and acuity) peaks at the fovea and decreases with eccentricity. The eccentricity effect is related to the larger visual cortical surface area corresponding to the fovea, but it is unknown whether differential feature tuning contributes to this eccentricity effect. Here, we investigated two system-level computations underlying the eccentricity effect: featural representation (tuning) and internal noise. Observers (both sexes) detected a Gabor embedded in filtered white noise which appeared at the fovea or one of four perifoveal locations. We used psychophysical reverse correlation to estimate the weights assigned by the visual system to a range of orientations and spatial frequencies (SFs) in noisy stimuli, which are conventionally interpreted as perceptual sensitivity to the corresponding features. We found higher sensitivity to task-relevant orientations and SFs at the fovea than at the perifovea, and no difference in selectivity for either orientation or SF. Concurrently, we measured response consistency using a double-pass method, which allowed us to infer the level of internal noise by implementing a noisy observer model. We found lower internal noise at the fovea than at the perifovea. Finally, individual variability in contrast sensitivity correlated with sensitivity to and selectivity for task-relevant features, as well as with internal noise. Moreover, the behavioral eccentricity effect mainly reflected the foveal advantage in orientation sensitivity compared with other computations. These findings suggest that the eccentricity effect stems from a better representation of task-relevant features and lower internal noise at the fovea than at the perifovea.
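The double-pass logic can be illustrated with a toy noisy-observer simulation (all names and noise levels below are illustrative assumptions, not the study's parameters): each noisy stimulus is judged twice, and because internal noise is drawn fresh on every pass, the rate of agreement between passes indexes the internal noise level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical double-pass data: 500 noisy stimuli, each shown on two passes.
n_stimuli = 500
evidence = rng.standard_normal(n_stimuli)   # external (stimulus-driven) evidence
internal_noise_sd = 1.0                     # the quantity a real analysis infers

def respond(evidence, criterion=0.0):
    # Fresh internal noise each pass, so identical stimuli can get different
    # answers; report "signal present" (True) when noisy evidence exceeds criterion.
    return (evidence + rng.normal(0.0, internal_noise_sd, evidence.size)) > criterion

pass1, pass2 = respond(evidence), respond(evidence)
consistency = np.mean(pass1 == pass2)
# A noisy-observer model maps this consistency (together with percent correct)
# to an internal-to-external noise ratio; higher consistency = lower internal noise.
```

Fitting the model amounts to finding the `internal_noise_sd` whose simulated consistency matches the observed one.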
Affiliation(s)
- Shutian Xue
- Department of Psychology, New York University, New York, New York 10003
- Antonio Fernández
- Department of Psychology, New York University, New York, New York 10003
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York 10003
- Center for Neural Science, New York University, New York, New York 10003
5.
Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, Jack RE. Cultural facial expressions dynamically convey emotion category and intensity information. Curr Biol 2024; 34:213-223.e5. [PMID: 38141619] [PMCID: PMC10831323] [DOI: 10.1016/j.cub.2023.12.001]
Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior [1-3]. For example, attack often follows signals of intense aggression if receivers fail to retreat [4, 5]. Humans regularly use facial expressions to communicate such information [6-11]. Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.
Affiliation(s)
- Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Daniel S Messinger
- Departments of Psychology, Pediatrics, and Electrical & Computer Engineering, University of Miami, 5665 Ponce De Leon Blvd, Coral Gables, FL 33146, USA
- Cheng Chen
- Foreign Language Department, Teaching Centre for General Courses, Chengdu Medical College, 601 Tianhui Street, Chengdu 610083, China
- Hongmei Yan
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, North Jianshe Road, Chengdu 611731, China
- Yaocong Duan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G B Garrod
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Rachael E Jack
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
6.
Brodbeck C, Das P, Gillis M, Kulasingham JP, Bhattasali S, Gaston P, Resnik P, Simon JZ. Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions. eLife 2023; 12:e85012. [PMID: 38018501] [PMCID: PMC10783870] [DOI: 10.7554/elife.85012]
Abstract
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. 
We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
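The "multiple regression with a time dimension" idea above can be sketched in plain NumPy: a toy single-predictor TRF recovered by ridge regression on a time-lagged design matrix. This is an illustrative stand-in, not Eelbrain's API (the toolkit itself estimates mTRFs by boosting and adds group-level statistics); the signal names and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

n, lags = 5000, 40                        # samples; TRF length in samples
stimulus = rng.standard_normal(n)         # e.g. a speech-envelope predictor
true_trf = np.exp(-np.arange(lags) / 8) * np.sin(np.arange(lags) / 4)
# Simulated brain response = stimulus convolved with the TRF, plus noise.
eeg = np.convolve(stimulus, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Time-lagged design matrix: column k holds the stimulus delayed by k samples,
# so the regression weights are exactly the TRF as a function of time-lag.
X = np.zeros((n, lags))
for k in range(lags):
    X[k:, k] = stimulus[:n - k]

# Ridge-regularised least squares recovers the TRF.
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)
```

With multiple predictors, their lagged columns are simply concatenated into one design matrix, and the stacked weight vector is the multivariate TRF (mTRF).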
Affiliation(s)
- Proloy Das
- Stanford University, Stanford, United States
- Philip Resnik
- University of Maryland, College Park, College Park, United States
7.
Tangtartharakul G, Morgan CA, Rushton SK, Schwarzkopf DS. Retinotopic connectivity maps of human visual cortex with unconstrained eye movements. Hum Brain Mapp 2023; 44:5221-5237. [PMID: 37555758] [PMCID: PMC10543111] [DOI: 10.1002/hbm.26446]
Abstract
Human visual cortex contains topographic visual field maps whose organization can be revealed with retinotopic mapping. Unfortunately, constraints posed by standard mapping hinder its use in patients, atypical subject groups, and individuals at either end of the lifespan. This severely limits the conclusions we can draw about visual processing in such individuals. Here, we present a novel data-driven method to estimate connective fields, resulting in fine-grained maps of the functional connectivity between brain areas. We find that inhibitory connectivity fields accompany, and often surround, facilitatory fields. The visual field extent of these inhibitory subfields falls off with cortical magnification. We further show that our method is robust to large eye movements and myopic defocus. Importantly, freed from the controlled stimulus conditions of standard mapping experiments, our approach can generate retinotopic maps from entertaining stimuli viewed with unconstrained eye movements, including maps of the peripheral visual field that were hitherto only possible with special stimulus displays. Generally, our results show that the connective field method can reveal the retinotopic architecture of visual cortex in patients and participants for whom this is at best difficult and confounded, if not impossible, with current methods.
Affiliation(s)
- Gene Tangtartharakul
- School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand
- School of Psychology and Centre for Brain Research, University of Auckland, Auckland, New Zealand
- Catherine A. Morgan
- School of Psychology and Centre for Brain Research, University of Auckland, Auckland, New Zealand
- Centre for Advanced MRI, UniServices Limited, Auckland, New Zealand
- D. Samuel Schwarzkopf
- School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand
- Experimental Psychology, University College London, London, UK
8.
Lehnert J, Cha K, Halperin J, Yang K, Zheng DF, Khadra A, Cook EP, Krishnaswamy A. Visual attention to features and space in mice using reverse correlation. Curr Biol 2023; 33:3690-3701.e4. [PMID: 37611588] [DOI: 10.1016/j.cub.2023.07.060]
Abstract
Visual attention allows the brain to evoke behaviors based on the most important visual features. Mouse models offer immense potential to gain a circuit-level understanding of this phenomenon, yet how mice distribute attention across features and locations is not well understood. Here, we describe a new approach to address this limitation by training mice to detect weak vertical bars in a background of dynamic noise while spatial cues manipulate their attention. By adapting a reverse-correlation method from human studies, we linked behavioral decisions to stimulus features and locations. We show that mice deployed attention to a small rostral region of the visual field. Within this region, mice attended to multiple features (orientation, spatial frequency, contrast) that indicated the presence of weak vertical bars. This attentional tuning grew with training, multiplicatively scaled behavioral sensitivity, approached that of an ideal observer, and resembled the effects of attention in humans. Taken together, we demonstrate that mice can simultaneously attend to multiple features and locations of a visual stimulus.
Affiliation(s)
- Jonas Lehnert
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada; Quantitative Life Sciences, McGill University, Montreal, QC H3A 1E3, Canada
- Kuwook Cha
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada
- Jamie Halperin
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada
- Kerry Yang
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada
- Daniel F Zheng
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada
- Anmar Khadra
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada; Quantitative Life Sciences, McGill University, Montreal, QC H3A 1E3, Canada; Centre for Applied Mathematics in Bioscience and Medicine, McGill University, Montreal, QC H3G 0B1, Canada
- Erik P Cook
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada; Quantitative Life Sciences, McGill University, Montreal, QC H3A 1E3, Canada; Centre for Applied Mathematics in Bioscience and Medicine, McGill University, Montreal, QC H3G 0B1, Canada
- Arjun Krishnaswamy
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada; Quantitative Life Sciences, McGill University, Montreal, QC H3A 1E3, Canada
9.
Petsko CD, Kteily NS. Political (Meta-)Dehumanization in Mental Representations: Divergent Emphases in the Minds of Liberals Versus Conservatives. Pers Soc Psychol Bull 2023:1461672231180971. [PMID: 37415508] [DOI: 10.1177/01461672231180971]
Abstract
We conducted two reverse-correlation studies, as well as two pilot studies reported in the online supplement (total N = 1,411), on the topics of (a) whether liberals and conservatives differ in the types of dehumanization that they cognitively emphasize when mentally representing one another, and if so, (b) whether liberals and conservatives are sensitive to how they are represented in the minds of political outgroup members. Results suggest that partisans indeed differ in the types of dehumanization that they cognitively emphasize when mentally representing one another: whereas conservatives' dehumanization of liberals emphasizes immaturity (vs. savagery), liberals' dehumanization of conservatives more strongly emphasizes savagery (vs. immaturity). In addition, results suggest that partisans may be sensitive to how they are represented. That is, partisans' meta-representations-their representations of how the outgroup represents the ingroup-appear to accurately index the relative emphases of these two dimensions in the minds of political outgroup members.
10.
Oliver A, Tracy RE, Young SG, Wout DA. Black + White = Prototypically Black: Visualizing Black and White People's Mental Representations of Black-White Biracial People. Pers Soc Psychol Bull 2023:1461672231164026. [PMID: 37052339] [DOI: 10.1177/01461672231164026]
Abstract
Utilizing reverse correlation, we investigated Black and White participants' mental representations of Black-White Biracial people. Across 200 trials, Black and White participants chose which of two faces best fit specific social categories. Using these decisions, we visually estimated Black and White people's mental representations of Biracial people by generating classification images (CIs). Independent raters blind to condition determined that White CI generators' Biracial CI was prototypically Blacker (i.e., more Afrocentric facial features and darker skin tone) than Black CI generators' Biracial CI (Study 1a/b). Furthermore, independent raters could not distinguish between White CI generators' Black and Biracial CIs, a bias not exhibited by Black CI generators (Study 2). A separate task demonstrated that prejudiced White participants allocated fewer imaginary funds to the more prototypically Black Biracial CI (Study 3), providing converging evidence. How phenotypicality bias, the outgroup homogeneity effect, and hypodescent influence people's mental images of ingroup/outgroup members is discussed.
11.
Kim J, Moon K, Kim S, Kim H, Ko YG. The relationship between mental representations of self and social evaluation: Examining the validity and usefulness of visual proxies of self-image. Front Psychol 2023; 13:937905. [PMID: 36710754] [PMCID: PMC9878293] [DOI: 10.3389/fpsyg.2022.937905]
Abstract
Reverse correlation (RC) method has been recently used to visualize mental representations of self. Previous studies have mainly examined the relationship between psychological aspects measured by self-reports and classification images of self (self-CIs), which are visual proxies of self-image generated through the RC method. In Experiment 1 (N = 118), to extend the validity of self-CIs, we employed social evaluation on top of self-reports as criterion variables and examined the relationship between self-CIs and social evaluation provided by clinical psychologists. Experiment 1 revealed that the valence ratings of self-CIs evaluated by independent raters predicted social evaluation after controlling for the effects of self-reported self-esteem and extraversion. Furthermore, in Experiment 2 (N = 127), we examined whether a computational scoring method - a method to assess self-CIs without employing independent raters - could be applied to evaluate the valence of participants' self-CIs. Experiment 2 found that the computational scores of self-CIs were comparable to independent valence ratings of self-CIs. We provide evidence that self-CIs can add independent information to self-reports in predicting social evaluation. We also suggest that the computational scoring method can complement the independent rating process of self-CIs. Overall, our findings reveal that self-CIs are a valid and useful tool to examine self-image more profoundly.
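One common rater-free way to score a CI, sketched below, is pixelwise correlation with a reference valence template (e.g., an average CI for high-valence faces). This is an illustrative assumption with invented array names; the paper's actual computational scoring procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: each participant's self-CI (flattened 64x64 pixels)
# and a reference "positive valence" template CI.
n_participants, n_pixels = 127, 64 * 64
self_cis = rng.standard_normal((n_participants, n_pixels))
valence_template = rng.standard_normal(n_pixels)

def template_score(ci, template):
    """Score a CI by its pixelwise correlation with the template:
    a rater-free valence proxy, higher = more template-like."""
    return np.corrcoef(ci, template)[0, 1]

scores = np.array([template_score(ci, valence_template) for ci in self_cis])
```

The resulting scores can then be validated against independent raters' valence judgments, as Experiment 2 did for its computational method.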
Affiliation(s)
- Jinwon Kim
- School of Psychology, Korea University, Seoul, South Korea
- Kibum Moon
- School of Psychology, Korea University, Seoul, South Korea
- Sojeong Kim
- Department of Psychiatry, Korea University College of Medicine, Seoul, South Korea
- Hackjin Kim
- School of Psychology, Korea University, Seoul, South Korea
- Young-gun Ko
- School of Psychology, Korea University, Seoul, South Korea
12.
Abstract
OBJECTIVE: The Temporal Response Function (TRF) is a linear model of neural activity time-locked to continuous stimuli, including continuous speech. TRFs based on speech envelopes typically have distinct components that have provided remarkable insights into the cortical processing of speech. However, current methods may lead to less than reliable estimates of single-subject TRF components. Here, we compare two established methods of TRF component estimation, and also propose novel algorithms that utilize prior knowledge of these components, bypassing full TRF estimation.
METHODS: We compared two established algorithms, ridge and boosting, and two novel algorithms based on Subspace Pursuit (SP) and Expectation Maximization (EM), which directly estimate TRF components given plausible assumptions regarding component characteristics. Single-channel, multi-channel, and source-localized TRFs were fit on simulations and real magnetoencephalographic data. Performance metrics included model fit and component estimation accuracy.
RESULTS: Boosting and ridge had comparable performance in component estimation. The novel algorithms outperformed the others in simulations, but not on real data, possibly because the assumed component characteristics were not actually met. Ridge had slightly better model fits on real data than boosting, but also more spurious TRF activity.
CONCLUSION: Both smooth (ridge) and sparse (boosting) algorithms perform comparably at TRF component estimation. The SP and EM algorithms may be accurate, but rely on assumptions about component characteristics.
SIGNIFICANCE: This systematic comparison establishes the suitability of widely used and novel algorithms for estimating robust TRF components, which is essential for improved subject-specific investigations into the cortical processing of speech.
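The sparse "boosting" estimator contrasted with ridge above can be sketched as coordinate-wise greedy updates to the TRF: repeatedly nudge the single lag coefficient that most reduces the residual error by a small fixed step. This is a simplified toy version (no cross-validated early stopping, invented signal names and sizes), not the algorithm as implemented in any particular toolkit.

```python
import numpy as np

rng = np.random.default_rng(4)

n, lags = 2000, 20
stimulus = rng.standard_normal(n)
X = np.zeros((n, lags))                   # time-lagged design matrix
for k in range(lags):
    X[k:, k] = stimulus[:n - k]

true_trf = np.zeros(lags)                 # sparse ground-truth TRF
true_trf[[3, 7]] = [1.0, -0.5]
y = X @ true_trf + 0.3 * rng.standard_normal(n)

# Boosting: greedy coordinate-wise steps; stop when no step reduces the error.
trf = np.zeros(lags)
step = 0.05
residual = y.copy()
for _ in range(1000):
    gains = residual @ X                  # correlation of residual with each lag
    k = int(np.argmax(np.abs(gains)))
    delta = step * np.sign(gains[k])
    new_residual = residual - delta * X[:, k]
    if new_residual @ new_residual >= residual @ residual:
        break                             # best single step no longer helps
    trf[k] += delta
    residual = new_residual
```

Because each update touches a single lag, the estimate stays sparse, which matches the "sparse (boosting)" versus "smooth (ridge)" contrast in the abstract.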
13.
Wang L, Ong JH, Ponsot E, Hou Q, Jiang C, Liu F. Mental representations of speech and musical pitch contours reveal a diversity of profiles in autism spectrum disorder. Autism 2022; 27:629-646. [PMID: 35848413] [PMCID: PMC10074762] [DOI: 10.1177/13623613221111207]
Abstract
LAY ABSTRACT: As a key auditory attribute of sounds, pitch is ubiquitous in our everyday listening experience involving language, music and environmental sounds. Given its critical role in auditory processing related to communication, numerous studies have investigated pitch processing in autism spectrum disorder. However, the findings have been mixed, reporting either enhanced, typical or impaired performance among autistic individuals. By investigating top-down comparisons of internal mental representations of pitch contours in speech and music, this study shows for the first time that, while autistic individuals exhibit diverse profiles of pitch processing compared to non-autistic individuals, their mental representations of pitch contours are typical across domains. These findings suggest that pitch-processing mechanisms are shared across domains in autism spectrum disorder and provide theoretical implications for using music to improve speech for those autistic individuals who have language problems.
Affiliation(s)
- Li Wang
- University of Reading, UK; The Chinese University of Hong Kong, Hong Kong
- Qingqi Hou
- Nanjing Normal University of Special Education, China
14
Urale PWB, Puckett AM, York A, Arnold D, Schwarzkopf DS. Highly accurate retinotopic maps of the physiological blind spot in human visual cortex. Hum Brain Mapp 2022; 43:5111-5125. [PMID: 35796159] [PMCID: PMC9812231] [DOI: 10.1002/hbm.25996]
Abstract
The physiological blind spot is a naturally occurring scotoma corresponding to the optic disc in the retina of each eye. Even during monocular viewing, observers are usually oblivious to the scotoma, in part because the visual system extrapolates information from the surrounding area. Unfortunately, studying this visual field region with neuroimaging has proven difficult, as it occupies only a small part of retinotopic cortex. Here, we used functional magnetic resonance imaging and a novel data-driven method for mapping the retinotopic organization in and around the blind spot representation in V1. Our approach allowed highly accurate reconstructions of the extent of an observer's blind spot and outperformed conventional model-based analyses. This method opens exciting opportunities to study the plasticity of receptive fields after visual field loss, and our data add to evidence suggesting that the neural circuitry responsible for impressions of perceptual completion across the physiological blind spot most likely involves regions of extrastriate cortex beyond V1.
Affiliation(s)
- Poutasi W. B. Urale
- School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
- Alexander M. Puckett
- School of Psychology, University of Queensland, Brisbane, Queensland, Australia
- Queensland Brain Institute, University of Queensland, Brisbane, Queensland, Australia
- Ashley York
- School of Psychology, University of Queensland, Brisbane, Queensland, Australia
- Queensland Brain Institute, University of Queensland, Brisbane, Queensland, Australia
- Derek Arnold
- School of Psychology, University of Queensland, Brisbane, Queensland, Australia
- Queensland Brain Institute, University of Queensland, Brisbane, Queensland, Australia
- D. Samuel Schwarzkopf
- School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand
- Experimental Psychology, University College London, London, United Kingdom
15
Liu M, Duan Y, Ince RAA, Chen C, Garrod OGB, Schyns PG, Jack RE. Facial expressions elicit multiplexed perceptions of emotion categories and dimensions. Curr Biol 2022; 32:200-209.e6. [PMID: 34767768] [PMCID: PMC8751635] [DOI: 10.1016/j.cub.2021.10.035]
Abstract
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information (i.e., specific categories and broader dimensions) via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent (i.e., multiplex) categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results, based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms, show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.
Affiliation(s)
- Meng Liu
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Yaocong Duan
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A A Ince
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Chaona Chen
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Oliver G B Garrod
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Rachael E Jack
- School of Psychology & Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
16
Maister L, De Beukelaer S, Longo MR, Tsakiris M. The Self in the Mind's Eye: Revealing How We Truly See Ourselves Through Reverse Correlation. Psychol Sci 2021; 32:1965-1978. [PMID: 34761992] [DOI: 10.1177/09567976211018618]
Abstract
Is there a way to visually depict the image people "see" of themselves in their minds' eyes? And if so, what can these mental images tell us about ourselves? We used a computational reverse-correlation technique to explore individuals' mental "self-portraits" of their faces and body shapes in an unbiased, data-driven way (total N = 116 adults). Self-portraits were similar to individuals' real faces but, importantly, also contained clues to each person's self-reported personality traits, which were reliably detected by external observers. Furthermore, people with higher social self-esteem produced more true-to-life self-portraits. Unlike face portraits, body portraits had negligible relationships with individuals' actual body shape, but as with faces, they were influenced by people's beliefs and emotions. We show how psychological beliefs and attitudes about oneself bias the perceptual representation of one's appearance and provide a unique window into the internal mental self-representation, findings that have important implications for mental health and visual culture.
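The reverse-correlation logic behind such "self-portraits" can be illustrated in a few lines: present random noise, let an observer classify each sample, and contrast the noise fields by response. Everything below (image size, the simulated observer's internal template, trial count) is a toy assumption, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "template": the feature pattern driving the simulated observer's
# choices (a stand-in for a participant's internal mental image).
size = 16
template = np.zeros((size, size))
template[4:12, 4:12] = 1.0

def simulated_observer(noise):
    # Respond "yes" when the noise correlates positively with the template.
    return float(np.sum(noise * template)) > 0

# Classic reverse correlation: show random noise, then average the noise
# fields sorted by the observer's response.
chosen, rejected = [], []
for _ in range(2000):
    noise = rng.standard_normal((size, size))
    (chosen if simulated_observer(noise) else rejected).append(noise)

classification_image = np.mean(chosen, axis=0) - np.mean(rejected, axis=0)
```

The resulting classification image is brightest where the hidden template has mass, which is how the technique visualizes an internal representation without ever measuring it directly.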
Affiliation(s)
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London
- Manos Tsakiris
- The Warburg Institute, School of Advanced Study, University of London; Department of Psychology, Royal Holloway, University of London; Department of Behavioural and Cognitive Sciences, Faculty of Humanities, Education and Social Sciences, University of Luxembourg
17
Daube C, Xu T, Zhan J, Webb A, Ince RA, Garrod OG, Schyns PG. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity. Patterns (N Y) 2021; 2:100348. [PMID: 34693374] [PMCID: PMC8515012] [DOI: 10.1016/j.patter.2021.100348]
Abstract
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
Affiliation(s)
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Tian Xu
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, England, UK
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Andrew Webb
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G.B. Garrod
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
18
Okazawa G, Sha L, Kiani R. Linear Integration of Sensory Evidence over Space and Time Underlies Face Categorization. J Neurosci 2021; 41:7876-7893. [PMID: 34326145] [DOI: 10.1523/JNEUROSCI.3055-20.2021]
Abstract
Visual object recognition relies on elaborate sensory processes that transform retinal inputs to object representations, but it also requires decision-making processes that read out object representations and function over prolonged time scales. The computational properties of these decision-making processes remain underexplored for object recognition. Here, we study these computations by developing a stochastic multifeature face categorization task. Using quantitative models and tight control of spatiotemporal visual information, we demonstrate that human subjects (five males, eight females) categorize faces through an integration process that first linearly adds the evidence conferred by task-relevant features over space to create aggregated momentary evidence and then linearly integrates it over time with minimum information loss. Discrimination of stimuli along different category boundaries (e.g., identity or expression of a face) is implemented by adjusting feature weights of spatial integration. This linear but flexible integration process over space and time bridges past studies on simple perceptual decisions to complex object recognition behavior. SIGNIFICANCE STATEMENT Although simple perceptual decision-making such as discrimination of random dot motion has been successfully explained as accumulation of sensory evidence, we lack rigorous experimental paradigms to study the mechanisms underlying complex perceptual decision-making such as discrimination of naturalistic faces. We develop a stochastic multifeature face categorization task as a systematic approach to quantify the properties and potential limitations of the decision-making processes during object recognition. We show that human face categorization can be modeled as a linear integration of sensory evidence over space and time. Our framework for studying object recognition as a spatiotemporal integration process is broadly applicable to other object categories and bridges past studies of object recognition and perceptual decision-making.
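The linear space-then-time integration scheme described in the abstract can be sketched as a simulation; the feature weights, frame count, and noise level below are illustrative assumptions rather than parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Momentary evidence per frame: each face feature (e.g. eyes, nose,
# mouth) contributes independent noisy evidence toward one category.
n_trials, n_frames, n_features = 2000, 30, 3
feature_strength = np.array([0.2, 0.1, 0.05])   # assumed informativeness

evidence = feature_strength + rng.standard_normal((n_trials, n_frames, n_features))

# Linear spatial integration (sum over features) yields aggregated
# momentary evidence; linear temporal integration (sum over frames)
# yields the decision variable, with no information loss.
momentary = evidence.sum(axis=2)
decision_variable = momentary.sum(axis=1)
choices = decision_variable > 0

accuracy = choices.mean()
```

Because both stages are linear, the model's sensitivity grows with the root of the number of integrated samples, the signature that distinguishes lossless accumulation from leaky or thresholded alternatives.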
19
Wilmott JP, Michel MM. Transsaccadic integration of visual information is predictive, attention-based, and spatially precise. J Vis 2021; 21:14. [PMID: 34374744] [PMCID: PMC8366295] [DOI: 10.1167/jov.21.8.14]
Abstract
Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal "psychophysical kernel" characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
Affiliation(s)
- James P Wilmott
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
- Melchi M Michel
- Department of Psychology and Center for Cognitive Science (RuCCS), Rutgers University, Piscataway, NJ, USA
- https://mmmlab.org/
20
Zhan J, Liu M, Garrod OGB, Daube C, Ince RAA, Jack RE, Schyns PG. Modeling individual preferences reveals that face beauty is not universally perceived across cultures. Curr Biol 2021; 31:2243-2252.e6. [PMID: 33798430] [PMCID: PMC8162177] [DOI: 10.1016/j.cub.2021.03.013]
Abstract
Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5, 6, 7 including representing the diversity of beauty preferences within and across cultures.8, 9, 10, 11, 12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents. 
- We modeled individual preferences for attractive faces in two cultures
- Attractive face features differ from the face average and sexual dimorphism
- Instead, culture and individual preferences shape attractive face features
- Attractive face features from a culture are used to judge other-ethnicity faces
Affiliation(s)
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Meng Liu
- School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Oliver G B Garrod
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Rachael E Jack
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK; School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK; School of Psychology, University of Glasgow, Glasgow, Scotland G12 8QB, UK
21
Jalali S, Martin SE, Ghose T, Buscombe RM, Solomon JA, Yarrow K. Information Accrual From the Period Preceding Racket-Ball Contact for Tennis Ground Strokes: Inferences From Stochastic Masking. Front Psychol 2019; 10:1969. [PMID: 31507503] [PMCID: PMC6718709] [DOI: 10.3389/fpsyg.2019.01969]
Abstract
Previous research suggests the existence of an expert anticipatory advantage, whereby skilled sportspeople are able to predict an upcoming action by utilizing cues contained in their opponent’s body kinematics. This ability is often inferred from “occlusion” experiments: information is systematically removed from first-person videos of an opponent, for example, by stopping a tennis video at the point of racket-ball contact, yet performance, such as discrimination of shot direction, remains above chance. In this study, we assessed the expert anticipatory advantage for tennis ground strokes via a modified approach, known as “bubbles,” in which information is randomly removed from videos in each trial. The bubbles profile is then weighted by trial outcome (i.e., a correct vs. incorrect discrimination) and combined across trials into a classification array, revealing the potential cues informing the decision. In two experiments (both with N = 34 skilled tennis players) we utilized either temporal or spatial bubbles, applying them to videos running from 0.8 to 0 s before the point of racket-ball contact (cf. Jalali et al., 2018). Results from the spatial experiment were somewhat suggestive of accrual from the torso region of the body, but were not compelling. Results from the temporal experiment, on the other hand, were clear: information was accrued mainly during the period immediately prior to racket-ball contact. This result is broadly consistent with prior work using nonstochastic approaches to video manipulation, and cannot be an artifact of temporal smear from information accrued after racket-ball contact, because no such information was present.
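The temporal "bubbles" analysis described above can be sketched as follows: a random temporal mask is applied on each trial, and a classification array is formed by contrasting mask values on correct versus incorrect trials. The bin count, reveal probability, and simulated player below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bins = 40          # time bins tiling the pre-contact window
informative = 35     # bin assumed to carry the decisive kinematic cue

def run_trial():
    # Temporal bubbles mask: each bin is independently revealed or hidden.
    mask = rng.random(n_bins) < 0.3
    # Simulated player: mostly correct when the informative bin is
    # visible, at chance otherwise.
    p_correct = 0.9 if mask[informative] else 0.5
    return mask.astype(float), rng.random() < p_correct

trials = [run_trial() for _ in range(4000)]
masks = np.array([m for m, _ in trials])
outcomes = np.array([c for _, c in trials])

# Classification array: difference in reveal probability between correct
# and incorrect trials; it peaks where visibility drives performance.
classification = masks[outcomes].mean(axis=0) - masks[~outcomes].mean(axis=0)
```

Because the masks are random rather than experimenter-chosen, the peak of the classification array localizes the informative period without prespecifying occlusion points, which is the advantage of the stochastic approach over classic occlusion designs.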
Affiliation(s)
- Sepehr Jalali
- Department of Psychology, City, University of London, London, United Kingdom
- Sian E Martin
- Department of Psychology, City, University of London, London, United Kingdom
- Tandra Ghose
- Department of Psychology, Technische Universität Kaiserslautern, Kaiserslautern, Germany
- Richard M Buscombe
- School of Health Sport and Bioscience, University of East London, London, United Kingdom
- Joshua A Solomon
- Centre for Applied Vision Science, City, University of London, London, United Kingdom
- Kielan Yarrow
- Department of Psychology, City, University of London, London, United Kingdom
22
Abstract
Converging results suggest that perception is controlled by rhythmic processes in the brain. In the auditory domain, neuroimaging studies show that the perception of sounds is shaped by rhythmic activity prior to the stimulus, and electrophysiological recordings have linked delta and theta band activity to the functioning of individual neurons. These results have promoted theories of rhythmic modes of listening and generally suggest that the perceptually relevant encoding of acoustic information is structured by rhythmic processes along auditory pathways. A prediction from this perspective, which so far has not been tested, is that such rhythmic processes also shape how acoustic information is combined over time to judge extended soundscapes. The present study was designed to directly test this prediction. Human participants judged the overall change in perceived frequency content in temporally extended (1.2-1.8 s) soundscapes, while the perceptual use of the available sensory evidence was quantified using psychophysical reverse correlation. Model-based analysis of individual participants' perceptual weights revealed a rich temporal structure, including linear trends, a U-shaped profile tied to the overall stimulus duration, and, importantly, rhythmic components at the time scale of 1-2 Hz. The collective evidence found here across four versions of the experiment supports the notion that rhythmic processes operating on the delta time scale structure how perception samples temporally extended acoustic scenes.
Affiliation(s)
- Christoph Kayser
- Department for Cognitive Neuroscience & Cognitive Interaction Technology, Center of Excellence, Bielefeld University, Bielefeld, Germany
23
Zhan J, Ince RAA, van Rijsbergen N, Schyns PG. Dynamic Construction of Reduced Representations in the Brain for Perceptual Decision Behavior. Curr Biol 2019; 29:319-326.e4. [PMID: 30639108] [PMCID: PMC6345582] [DOI: 10.1016/j.cub.2018.11.049]
Abstract
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations, e.g., categorizing the same object as "a car" or "a Porsche." While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how they happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway to become the low-dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magneto-encephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.
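A core ingredient of frameworks like SIR is quantifying the statistical dependence between stimulus information and behavior. As a minimal illustration (not the authors' implementation), the sketch below computes mutual information between a behaviorally relevant feature and a simulated response, and contrasts it with an irrelevant feature; all data are synthetic.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (bits) between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(6)
relevant = rng.integers(0, 2, 10000)     # feature that drives behavior
irrelevant = rng.integers(0, 2, 10000)   # feature that does not
# Simulated behavior: follows the relevant feature on 90% of trials.
response = np.where(rng.random(10000) < 0.9, relevant, 1 - relevant)

mi_rel = mutual_information(relevant, response)
mi_irr = mutual_information(irrelevant, response)
```

Extending this two-variable quantity to the three-way interaction of stimulus, brain activity, and behavior is what distinguishes the approach described in the abstract from conventional pairwise analyses.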
Affiliation(s)
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Nicola van Rijsbergen
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom; School of Psychology, University of Glasgow, 62 Hillhead Street, Glasgow, Scotland G12 8QB, United Kingdom
24
Matteucci G, Bellacosa Marotti R, Riggi M, Rosselli FB, Zoccolan D. Nonlinear Processing of Shape Information in Rat Lateral Extrastriate Cortex. J Neurosci 2019; 39:1649-1670. [PMID: 30617210] [DOI: 10.1523/JNEUROSCI.1938-18.2018]
Abstract
In rodents, the progression of extrastriate areas located laterally to primary visual cortex (V1) has been assigned to a putative object-processing pathway (homologous to the primate ventral stream), based on anatomical considerations. Recently, we found functional support for this attribution (Tafazoli et al., 2017) by showing that this cortical progression is specialized for coding object identity despite view changes, the hallmark property of a ventral-like pathway. Here, we sought to clarify what computations are at the base of such specialization. To this aim, we performed multielectrode recordings from V1 and laterolateral area LL (at the apex of the putative ventral-like hierarchy) of male adult rats during the presentation of drifting gratings and noise movies. We found that the extent to which neuronal responses were entrained to the phase of the gratings sharply dropped from V1 to LL, along with the quality of the receptive fields inferred through reverse correlation. Concomitantly, the tendency of neurons to respond to different oriented gratings increased, whereas the sharpness of orientation tuning declined. Critically, these trends are consistent with the nonlinear summation of visual inputs that is expected to take place along the ventral stream, according to the predictions of hierarchical models of ventral computations and a meta-analysis of the monkey literature. This suggests an intriguing homology between the mechanisms responsible for building up shape selectivity and transformation tolerance in the visual cortex of primates and rodents, reasserting the potential of the latter as models to investigate ventral stream functions at the circuitry level. SIGNIFICANCE STATEMENT Despite the growing popularity of rodents as models of visual functions, it remains unclear whether their visual cortex contains specialized modules for processing shape information. To address this question, we compared how neuronal tuning evolves from rat primary visual cortex (V1) to a downstream visual cortical region (area LL) that previous work has implicated in shape processing. In our experiments, LL neurons displayed a stronger tendency to respond to drifting gratings with different orientations while maintaining a sustained response across the whole duration of the drift cycle. These trends match the increased complexity of pattern selectivity and the augmented tolerance to stimulus translation found in monkey visual temporal cortex, thus revealing a homology between shape processing in rodents and primates.
25
Oliveira Ferreira de Souza B, Casanova C. Stronger responses to darks along the ventral pathway of the cat visual cortex. Eur J Neurosci 2018; 49:1102-1114. [PMID: 30549336] [DOI: 10.1111/ejn.14297]
Abstract
Light increments (brights) and decrements (darks) are differently processed throughout the early visual system. It is well known that a bias towards faster and stronger responses to darks is present in the retina, lateral geniculate nucleus, and primary visual cortex. In humans, psychophysical and neurophysiological data indicate that darks are better detected than brights, suggesting that the dark bias found in early visual areas is transmitted across the cortical hierarchy. Here, we tested this assumption by investigating the spatiotemporal features of responses to brights and darks in area 21a, a gateway area of the cat ventral stream, using reverse correlation analysis of a sparse noise stimulus. The receptive field of most 21a neurons exhibited larger dark subfields. Additionally, the amplitude of the responses to darks was considerably greater than that evoked by brights. In the temporal domain, no differences in response peak latency were found. Thus, the present study supports the notion that bright/dark asymmetries are transmitted throughout the cortical hierarchy and, further, that luminance processing varies as a function of position in the cortical hierarchy, with dark preference strongly enhanced (in spatial extent and response amplitude) along the ventral pathway.
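Reverse correlation with sparse noise, as used here, amounts to accumulating responses at each stimulated position separately for bright and dark probes. The sketch below simulates this mapping for a toy neuron with an assumed larger, stronger dark subfield; the grid size, firing model, and trial count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

size = 8
# Assumed ground-truth receptive field: a larger and stronger dark
# subfield than bright subfield, as reported for area 21a neurons.
bright_rf = np.zeros((size, size)); bright_rf[3, 3] = 1.0
dark_rf = np.zeros((size, size));   dark_rf[3:6, 3:6] = 1.5

maps = {"bright": np.zeros((size, size)), "dark": np.zeros((size, size))}
counts = {"bright": np.zeros((size, size)), "dark": np.zeros((size, size))}

for _ in range(20000):
    # Sparse noise: a single bright or dark square at a random position.
    y, x = rng.integers(0, size, size=2)
    polarity = "dark" if rng.random() < 0.5 else "bright"
    rf = dark_rf if polarity == "dark" else bright_rf
    response = rf[y, x] + 0.1 * rng.standard_normal()   # noisy firing rate
    # Reverse correlation: accumulate the response at the probed position.
    maps[polarity][y, x] += response
    counts[polarity][y, x] += 1

bright_map = maps["bright"] / np.maximum(counts["bright"], 1)
dark_map = maps["dark"] / np.maximum(counts["dark"], 1)
```

Comparing the two maps recovers both asymmetries at issue in the study: the dark map's peak response exceeds the bright map's, and the dark subfield covers more of the grid.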
Collapse
|
26
|
Abstract
Sensory systems relay information about the world to the brain, which enacts behaviors through motor outputs. To maximize information transmission, sensory systems discard redundant information through adaptation to the mean and variance of the environment. The behavioral consequences of sensory adaptation to environmental variance have been largely unexplored. Here, we study how larval fruit flies adapt sensory-motor computations underlying navigation to changes in the variance of visual and olfactory inputs. We show that variance adaptation can be characterized by rescaling of the sensory input and that for both visual and olfactory inputs, the temporal dynamics of adaptation are consistent with optimal variance estimation. In multisensory contexts, larvae adapt independently to variance in each sense, and portions of the navigational pathway encoding mixed odor and light signals are also capable of variance adaptation. Our results suggest multiplication as a mechanism for odor-light integration.
Collapse
Affiliation(s)
- Ruben Gepner
- Department of Physics, New York University, New York, United States
| | - Jason Wolk
- Department of Physics, New York University, New York, United States
| | | | - Sophie Dvali
- Department of Physics, New York University, New York, United States
| | - Marc Gershow
- Department of Physics, New York University, New York, United States.,Center for Neural Science, New York University, New York, United States.,Neuroscience Institute, New York University, New York, United States
| |
Collapse
|
27
|
Jalali S, Martin SE, Murphy CP, Solomon JA, Yarrow K. Classification Videos Reveal the Visual Information Driving Complex Real-World Speeded Decisions. Front Psychol 2018; 9:2229. [PMID: 30524338 PMCID: PMC6256113 DOI: 10.3389/fpsyg.2018.02229] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Accepted: 10/29/2018] [Indexed: 11/13/2022] Open
Abstract
Humans can rapidly discriminate complex scenarios as they unfold in real time, for example during law enforcement or, more prosaically, driving and sport. Such decision-making improves with experience, as new sources of information are exploited. For example, sports experts are able to predict the outcome of their opponent's next action (e.g., a tennis stroke) based on kinematic cues "read" from preparatory body movements. Here, we explore the use of psychophysical classification-image techniques to reveal how participants interpret complex scenarios. We used sport as a test case, filming tennis players serving and hitting ground strokes, each with two possible directions. These videos were presented to novices and club-level amateurs, running from 0.8 s before to 0.2 s after racquet-ball contact. During practice, participants anticipated shot direction under a time limit targeting 90% accuracy. Participants then viewed videos through Gaussian windows ("bubbles") placed at random in the temporal, spatial or spatiotemporal domains. Comparing bubbles from correct and incorrect trials revealed how information from different regions contributed toward a correct response. Temporally, only later frames of the videos supported accurate responding (from ~0.05 s before ball contact to 0.1 s afterwards). Spatially, information was accrued from the ball's trajectory and from the opponent's head. Spatiotemporal bubbles again highlighted ball trajectory information, but seemed susceptible to an attentional cuing artifact, which may caution against their wider use. Overall, bubbles proved effective in revealing regions of information accrual, and could thus be applied to help understand choice behavior in a range of ecologically valid situations.
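As a rough illustration of the bubbles logic (synthetic data; the informative window, bubble width, and psychometric rule below are assumptions, not the authors' values), a temporal classification image can be computed by contrasting the bubble masks of correct and incorrect trials:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100                                  # video frames
t = np.arange(T)
info = np.zeros(T); info[80:96] = 1.0    # only late frames carry direction cues

def run_trials(n=5000, sigma=5.0):
    """Each trial reveals the video through one random temporal bubble."""
    masks = np.empty((n, T))
    correct = np.empty(n, dtype=bool)
    for i in range(n):
        c = rng.uniform(0, T)
        bubble = np.exp(-0.5 * ((t - c) / sigma) ** 2)
        # p(correct): chance (0.5), rising with the information revealed
        p = 0.5 + 0.5 * min(1.0, (bubble * info).sum() / 5.0)
        masks[i] = bubble
        correct[i] = rng.random() < p
    return masks, correct

masks, correct = run_trials()
# Classification image: where revealed frames helped vs. hurt performance
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
peak = int(np.argmax(ci))
```

The difference image peaks inside the informative late window and dips for early frames, mirroring the paper's finding that only frames near racquet-ball contact supported accurate responding.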
Collapse
Affiliation(s)
- Sepehr Jalali
- Department of Psychology, City, University of London, London, United Kingdom
| | - Sian E Martin
- Department of Psychology, City, University of London, London, United Kingdom
| | - Colm P Murphy
- Expert Performance and Skill Acquisition Research Group, School of Sport, Health and Applied Science, St Mary's University, Twickenham, United Kingdom
| | - Joshua A Solomon
- Centre for Applied Vision Science, City, University of London, London, United Kingdom
| | - Kielan Yarrow
- Department of Psychology, City, University of London, London, United Kingdom
| |
Collapse
|
28
|
Levi AJ, Yates JL, Huk AC, Katz LN. Strategic and Dynamic Temporal Weighting for Perceptual Decisions in Humans and Macaques. eNeuro 2018; 5:ENEURO. [PMID: 30406190 DOI: 10.1523/ENEURO.0169-18.2018] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2018] [Revised: 08/08/2018] [Accepted: 09/01/2018] [Indexed: 12/14/2022] Open
Abstract
Perceptual decision-making is often modeled as the accumulation of sensory evidence over time. Recent studies using psychophysical reverse correlation have shown that even though the sensory evidence is stationary over time, subjects may exhibit a time-varying weighting strategy, weighting some stimulus epochs more heavily than others. While previous work has explained time-varying weighting as a consequence of static decision mechanisms (e.g., decision bound or leak), here we show that time-varying weighting can reflect strategic adaptation to stimulus statistics, and thus can readily take a number of forms. We characterized the temporal weighting strategies of humans and macaques performing a motion discrimination task in which the amount of information carried by the motion stimulus was manipulated over time. Both species could adapt their temporal weighting strategy to match the time-varying statistics of the sensory stimulus. When early stimulus epochs had higher mean motion strength than late, subjects adopted a pronounced early weighting strategy, where early information was weighted more heavily in guiding perceptual decisions. When the mean motion strength was greater in later stimulus epochs, in contrast, subjects shifted to a marked late weighting strategy. These results demonstrate that perceptual decisions involve a temporally flexible weighting process in both humans and monkeys, and introduce a paradigm with which to manipulate sensory weighting in decision-making tasks.
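The psychophysical reverse correlation referred to here can be sketched with simulated data (the weights, epoch count, and internal-noise level are illustrative assumptions): a temporal weighting kernel is recovered as the difference between the mean stimulus fluctuations preceding each choice.

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 10, 20000                      # stimulus epochs per trial, trials

# Ground-truth "early weighting": the first epochs drive the decision most
w_true = np.linspace(1.0, 0.1, K)

x = rng.normal(0, 1, size=(N, K))     # per-epoch motion-strength fluctuations
drive = x @ w_true + rng.normal(0, 1.0, N)   # internal (decision) noise
choice = drive > 0                    # e.g., rightward vs. leftward report

# Psychophysical kernel: mean stimulus preceding "right" minus "left" choices
kernel = x[choice].mean(axis=0) - x[~choice].mean(axis=0)
```

The recovered kernel is proportional to the true weighting profile, so a simulated early-weighting observer yields a kernel that declines across epochs, as in the early-weighting condition described above.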
Collapse
|
29
|
Pleskac TJ, Yu S, Hopwood C, Liu T. Mechanisms of deliberation during preferential choice: Perspectives from computational modeling and individual differences. ACTA ACUST UNITED AC 2018; 6:77-107. [PMID: 30643838 DOI: 10.1037/dec0000092] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Computational models of decision making typically assume that as people deliberate between options, they mentally simulate outcomes from each one and integrate valuations of these outcomes to form a preference. In two studies, we investigated this deliberation process using a task where participants make a series of decisions between a certain and an uncertain option, which were shown as dynamic visual samples that represented possible payoffs. We developed and validated a method of reverse correlational analysis for the task that measures how this time-varying signal was used to make a choice. The first study used this method to examine how information processing during deliberation differed from a perceptual analog of the task. We found participants were less sensitive to each sample of information during preferential choice. In a second study, we investigated how these different measures of deliberation were related to impulsivity and drug and alcohol use. We found that while properties of the deliberation process were not related to impulsivity, some aspects of the process may be related to substance use. In particular, alcohol abuse was related to diminished sensitivity to the payoff information, and drug use was related to the initial starting point of evidence accumulation. We synthesized our results with a rank-dependent sequential sampling model which suggests that participants allocated more attentional weight to larger potential payoffs during preferential choice.
Collapse
Affiliation(s)
| | - Shuli Yu
- Max Planck Institute for Human Development, Berlin, Germany
| | | | | |
Collapse
|
30
|
Liu M, Sharma AK, Shaevitz JW, Leifer AM. Temporal processing and context dependency in Caenorhabditis elegans response to mechanosensation. eLife 2018; 7:e36419. [PMID: 29943731 PMCID: PMC6054533 DOI: 10.7554/elife.36419] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Accepted: 06/10/2018] [Indexed: 11/13/2022] Open
Abstract
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps to reveal the brain's underlying computations. We investigate how the nematode Caenorhabditis elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. We find that the behavioral response is tuned to temporal properties of mechanosensory signals, such as their integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to the stimulus.
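A linear-nonlinear model of the kind described can be sketched as follows (the filter shapes, timescales, and response rule are hypothetical, chosen only to echo the integral- and derivative-like features mentioned above): a linear filter summarizes recent stimulus history, and a static nonlinearity maps the filter output to a response probability.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1                              # 100 ms bins
n = 6000                              # ten minutes of stimulus
stim = rng.normal(0, 1, n)            # hypothetical time-varying drive

# Linear stage: a filter mixing the stimulus's recent integral and derivative
lags = np.arange(30)                  # 3 s of stimulus history
integ = np.exp(-lags * dt / 1.0)      # leaky integration, ~1 s timescale
deriv = np.zeros(30); deriv[:2] = [1.0, -1.0]   # discrete derivative
filt = 0.5 * integ + 2.0 * deriv

g = np.convolve(stim, filt)[:n]       # filter output at each time bin

# Nonlinear stage: saturating probability of triggering a behavioral response
p_resp = 1.0 / (1.0 + np.exp(-(g - 1.0)))
responses = rng.random(n) < p_resp
```

Bins where the filter output is high should carry a much higher response rate than bins where it is low, which is the property the fitted model exploits to predict behavior.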
Collapse
Affiliation(s)
- Mochi Liu
- Lewis-Sigler Institute for Integrative Genomics, Princeton University, New Jersey, United States
| | - Anuj K Sharma
- Department of Physics, Princeton University, New Jersey, United States
| | - Joshua W Shaevitz
- Lewis-Sigler Institute for Integrative Genomics, Princeton University, New Jersey, United States
- Department of Physics, Princeton University, New Jersey, United States
| | - Andrew M Leifer
- Department of Physics, Princeton University, New Jersey, United States
- Princeton Neuroscience Institute, Princeton University, New Jersey, United States
| |
Collapse
|
31
|
Abstract
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
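The correlated-stimulus problem highlighted here can be demonstrated with a toy linear-nonlinear-Poisson simulation (all parameters are illustrative): with a temporally correlated stimulus, the raw spike-triggered average is a blurred version of the true filter, and multiplying by the inverse stimulus covariance largely undoes the blur.

```python
import numpy as np

rng = np.random.default_rng(4)
L, N = 20, 100000                      # filter length, stimulus samples

# Temporally correlated stimulus: first-order autoregressive noise
a = 0.8
white = rng.normal(0, 1, N)
stim = np.zeros(N)
for i in range(1, N):
    stim[i] = a * stim[i - 1] + white[i]

# LNP neuron with a biphasic ground-truth filter
k_true = 0.5 * np.sin(np.linspace(0, 2 * np.pi, L)) * np.exp(-np.linspace(0, 3, L))
X = np.lib.stride_tricks.sliding_window_view(stim, L)[:, ::-1]  # lagged design
drive = X @ k_true
p_spike = np.minimum(0.05 * np.exp(drive - drive.mean()), 1.0)
spikes = rng.random(len(p_spike)) < p_spike

sta = X[spikes].mean(axis=0) - X.mean(axis=0)  # raw STA: blurred by correlations
C = (X.T @ X) / len(X)                         # empirical stimulus covariance
k_white = np.linalg.solve(C, sta)              # correlation-corrected estimate

corr = lambda u, v: np.corrcoef(u, v)[0, 1]
```

The decorrelated estimate tracks the true filter closely, while the raw STA inherits the stimulus's autocorrelation; with natural stimuli the covariance is poorly conditioned and the inverse is usually regularized.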
Collapse
Affiliation(s)
- Johnatan Aljadeff
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA.
| | - Benjamin J Lansdell
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
| | - Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; WRF UW Institute for Neuroengineering, University of Washington, Seattle, WA 98195, USA
| | - David Kleinfeld
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Section of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA; Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA.
| |
Collapse
|
32
|
Abstract
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
Collapse
Affiliation(s)
| | - Rachael E Jack
- School of Psychology, University of Glasgow; Institute of Neuroscience and Psychology, University of Glasgow
| | | | | | - Jared D Martin
- Department of Psychology, University of Wisconsin-Madison
| | | |
Collapse
|
33
|
Crosse MJ, Di Liberto GM, Bednar A, Lalor EC. The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Front Hum Neurosci 2016; 10:604. [PMID: 27965557 PMCID: PMC5127806 DOI: 10.3389/fnhum.2016.00604] [Citation(s) in RCA: 260] [Impact Index Per Article: 32.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Accepted: 11/11/2016] [Indexed: 01/05/2023] Open
Abstract
Understanding how brains process sensory signals in natural environments is one of the key goals of twenty-first century neuroscience. While brain imaging and invasive electrophysiology will play key roles in this endeavor, there is also an important role to be played by noninvasive, macroscopic techniques with high temporal resolution such as electro- and magnetoencephalography. But challenges exist in determining how best to analyze such complex, time-varying neural responses to complex, time-varying and multivariate natural sensory stimuli. There has been a long history of applying system identification techniques to relate the firing activity of neurons to complex sensory stimuli and such techniques are now seeing increased application to EEG and MEG data. One particular example involves fitting a filter—often referred to as a temporal response function—that describes a mapping between some feature(s) of a sensory stimulus and the neural response. Here, we first briefly review the history of these system identification approaches and describe a specific technique for deriving temporal response functions known as regularized linear regression. We then introduce a new open-source toolbox for performing this analysis. We describe how it can be used to derive (multivariate) temporal response functions describing a mapping between stimulus and response in both directions. We also explain the importance of regularizing the analysis and how this regularization can be optimized for a particular dataset. We then outline specifically how the toolbox implements these analyses and provide several examples of the types of results that the toolbox can produce. Finally, we consider some of the limitations of the toolbox and opportunities for future development and application.
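The regularized linear regression at the core of temporal response function estimation can be sketched as follows; this is a generic illustration with synthetic data, not the mTRF Toolbox's actual code or API. A time-lagged design matrix is built from the stimulus, and the TRF is the ridge-regression solution mapping stimulus to response.

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 20000, 16                     # samples, number of TRF lags

stim = rng.normal(0, 1, n)           # stimulus feature (e.g., speech envelope)
lags = np.arange(L)
trf_true = np.sin(lags / 2.0) * np.exp(-lags / 4.0)   # hypothetical TRF

# Time-lagged design matrix: row t holds stim[t], stim[t-1], ..., stim[t-L+1]
X = np.lib.stride_tricks.sliding_window_view(
        np.r_[np.zeros(L - 1), stim], L)[:, ::-1]
eeg = X @ trf_true + rng.normal(0, 1.0, n)   # synthetic single-channel response

lam = 1.0                            # ridge parameter (cross-validated in practice)
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(L), X.T @ eeg)
```

Regularization matters most when stimulus features are correlated across lags (as in natural speech); here the stimulus is white, so a small ridge penalty suffices and the estimate recovers the true TRF.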
Collapse
Affiliation(s)
- Michael J Crosse
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Pediatrics and Department of Neuroscience, Albert Einstein College of Medicine, The Bronx, NY, USA
| | - Giovanni M Di Liberto
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Adam Bednar
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, NY, USA
| | - Edmund C Lalor
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, NY, USA
| |
Collapse
|
34
|
Ince RAA, Jaworska K, Gross J, Panzeri S, van Rijsbergen NJ, Rousselet GA, Schyns PG. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex 2016; 26:4123-4135. [PMID: 27550865 PMCID: PMC5066825 DOI: 10.1093/cercor/bhw196] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.
Collapse
Affiliation(s)
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| | - Katarzyna Jaworska
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| | - Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| | - Stefano Panzeri
- Laboratory of Neural Computation, Istituto Italiano di Tecnologia, Rovereto 38068, Italy
| | | | - Guillaume A Rousselet
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| | - Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
| |
Collapse
|
35
|
Inagaki M, Sasaki KS, Hashimoto H, Ohzawa I. Subspace mapping of the three-dimensional spectral receptive field of macaque MT neurons. J Neurophysiol 2016; 116:784-95. [PMID: 27193321 DOI: 10.1152/jn.00934.2015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2015] [Accepted: 05/18/2016] [Indexed: 11/22/2022] Open
Abstract
Neurons in the middle temporal (MT) visual area are thought to represent the velocity (direction and speed) of motion. Previous studies suggest the importance of both excitation and suppression for creating velocity representation in MT; however, details of the organization of excitation and suppression at the MT stage are not understood fully. In this article, we examine how excitatory and suppressive inputs are pooled in individual MT neurons by measuring their receptive fields in a three-dimensional (3-D) spatiotemporal frequency domain. We recorded the activity of single MT neurons from anesthetized macaque monkeys. To achieve both quality and resolution of the receptive field estimations, we applied a subspace reverse correlation technique in which a stimulus sequence of superimposed multiple drifting gratings was cross-correlated with the spiking activity of neurons. Excitatory responses tended to be organized in a manner representing a specific velocity independent of the spatial pattern of the stimuli. Conversely, suppressive responses tended to be distributed broadly over the 3-D frequency domain, supporting a hypothesis of response normalization. Despite the nonspecific distributed profile, the total summed strength of suppression was comparable to that of excitation in many MT neurons. Furthermore, suppressive responses reduced the bandwidth of velocity tuning, indicating that suppression improves the reliability of velocity representation. Our results suggest that both well-organized excitatory inputs and broad suppressive inputs contribute significantly to the invariant and reliable representation of velocity in MT.
Collapse
Affiliation(s)
- Mikio Inagaki
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Suita, Osaka, Japan
| | - Kota S Sasaki
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Suita, Osaka, Japan
| | - Hajime Hashimoto
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
| | - Izumi Ohzawa
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Suita, Osaka, Japan
| |
Collapse
|
36
|
Nestor A, Plaut DC, Behrmann M. Feature-based face representations and image reconstruction from behavioral and neural data. Proc Natl Acad Sci U S A 2016; 113:416-21. [PMID: 26711997 DOI: 10.1073/pnas.1514551112] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The reconstruction of images from neural data can provide a unique window into the content of human perceptual representations. Although recent efforts have established the viability of this enterprise using functional magnetic resonance imaging (fMRI) patterns, these efforts have relied on a variety of prespecified image features. Here, we take on the twofold task of deriving features directly from empirical data and of using these features for facial image reconstruction. First, we use a method akin to reverse correlation to derive visual features from functional MRI patterns elicited by a large set of homogeneous face exemplars. Then, we combine these features to reconstruct novel face images from the corresponding neural patterns. This approach allows us to estimate collections of features associated with different cortical areas as well as to successfully match image reconstructions to corresponding face exemplars. Furthermore, we establish the robustness and the utility of this approach by reconstructing images from patterns of behavioral data. From a theoretical perspective, the current results provide key insights into the nature of high-level visual representations, and from a practical perspective, these findings make possible a broad range of image-reconstruction applications via a straightforward methodological approach.
Collapse
|
37
|
Elijah DH, Samengo I, Montemurro MA. Thalamic neuron models encode stimulus information by burst-size modulation. Front Comput Neurosci 2015; 9:113. [PMID: 26441623 PMCID: PMC4585143 DOI: 10.3389/fncom.2015.00113] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2015] [Accepted: 08/28/2015] [Indexed: 11/13/2022] Open
Abstract
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information about the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes in instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of these classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about these more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest that the n-spike burst code is a general property of thalamic neurons.
Collapse
Affiliation(s)
- Daniel H Elijah
- Faculty of Life Sciences, The University of Manchester, Manchester, UK
| | - Inés Samengo
- Statistical and Interdisciplinary Physics Group, Instituto Balseiro and Centro Atómico Bariloche, San Carlos de Bariloche, Argentina
| | | |
Collapse
|
38
|
Piché M, Thomas S, Casanova C. Spatiotemporal profiles of receptive fields of neurons in the lateral posterior nucleus of the cat LP-pulvinar complex. J Neurophysiol 2015; 114:2390-403. [PMID: 26289469 DOI: 10.1152/jn.00649.2015] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Accepted: 08/16/2015] [Indexed: 11/22/2022] Open
Abstract
The pulvinar is the largest extrageniculate thalamic visual nucleus in mammals. It establishes reciprocal connections with virtually all visual cortices and likely plays a role in transthalamic cortico-cortical communication. In cats, the lateral posterior nucleus (LP) of the LP-pulvinar complex can be subdivided into two subregions, the lateral (LPl) and medial (LPm) parts, which receive a predominant input from the striate cortex and the superior colliculus, respectively. Here, we revisit the receptive field structure of LPl and LPm cells in anesthetized cats by determining their first-order spatiotemporal profiles through reverse correlation analysis following sparse noise stimulation. Our data reveal the existence of previously unidentified receptive field profiles in the LP nucleus in both the space and time domains. While some cells responded to only one stimulus polarity, the majority of neurons had receptive fields comprised of bright and dark responsive subfields. For these neurons, dark subfields were larger than bright subfields. A variety of receptive field spatial organization types were identified, ranging from totally overlapped to segregated bright and dark subfields. In the time domain, a large spectrum of activity overlap was found, from cells with temporally coinciding subfield activity to neurons with distinct, time-dissociated subfield peak activity windows. We also found LP neurons with space-time inseparable receptive fields and neurons with multiple activity periods. Finally, a substantial degree of homology was found between LPl and LPm first-order receptive field spatiotemporal profiles, suggesting a high integration of cortical and subcortical inputs within the LP-pulvinar complex.
Collapse
Affiliation(s)
- Marilyse Piché
- Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montréal, Québec, Canada
| | - Sébastien Thomas
- Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montréal, Québec, Canada
| | - Christian Casanova
- Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montréal, Québec, Canada
| |
Collapse
|
39
|
Roy S, Sinha SR, de Ruyter van Steveninck R. Encoding of yaw in the presence of distractor motion: studies in a fly motion sensitive neuron. J Neurosci 2015; 35:6481-94. [PMID: 25904799 DOI: 10.1523/JNEUROSCI.4256-14.2015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Motion estimation is crucial for aerial animals such as the fly, which perform fast and complex maneuvers while flying through a 3-D environment. Motion-sensitive neurons in the lobula plate, a part of the visual brain, of the fly have been studied extensively for their specialized role in motion encoding. However, the visual stimuli used in such studies are typically highly simplified, often move in restricted ways, and do not represent the complexities of optic flow generated during actual flight. Here, we use combined rotations about different axes to study how H1, a wide-field motion-sensitive neuron, encodes preferred yaw motion in the presence of stimuli not aligned with its preferred direction. Our approach is an extension of "white noise" methods, providing a framework that is readily adaptable to quantitative studies into the coding of mixed dynamic stimuli in other systems. We find that the presence of a roll or pitch ("distractor") stimulus reduces information transmitted by H1 about yaw, with the amount of this reduction depending on the variance of the distractor. Spike generation is influenced by features of both yaw and the distractor, where the degree of influence is determined by their relative strengths. Certain distractor features may induce bidirectional responses, which are indicative of an imbalance between global excitation and inhibition resulting from complex optic flow. Further, the response is shaped by the dynamics of the combined stimulus. Our results provide intuition for plausible strategies involved in efficient coding of preferred motion from complex stimuli having multiple motion components.
Collapse
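The "white noise" approach this entry extends has a well-known computational core: spike-triggered averaging. The sketch below is a generic toy simulation of that idea, not the authors' analysis; the filter shape, spiking threshold, and noise levels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_steps, filt_len = 50000, 40
# White-noise stimulus standing in for yaw velocity (illustrative units).
stimulus = rng.normal(size=n_steps)

# Hypothetical linear-nonlinear neuron: temporal filter plus threshold spiking.
t = np.arange(filt_len)
true_filter = np.exp(-t / 8.0) * np.sin(t / 4.0)

drive = np.convolve(stimulus, true_filter, mode="full")[:n_steps]
spikes = drive + rng.normal(0.0, 1.0, n_steps) > 2.0

# Spike-triggered average: mean stimulus segment preceding each spike.
# For Gaussian white-noise input this recovers the filter up to scale.
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= filt_len]
sta = np.mean([stimulus[s - filt_len + 1 : s + 1][::-1] for s in spike_times],
              axis=0)
# sta[0] is the lag closest to the spike, matching true_filter's indexing.
```

Correlating `sta` with `true_filter` confirms the recovery; extending this to two simultaneous stimulus streams (yaw plus a distractor) is the kind of generalization the entry above describes.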
|
40
|
Jones PR, Moore DR, Amitay S. Development of auditory selective attention: why children struggle to hear in noisy environments. Dev Psychol 2015; 51:353-69. [PMID: 25706591 PMCID: PMC4337492 DOI: 10.1037/a0038570] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 11/11/2014] [Accepted: 11/17/2014] [Indexed: 11/29/2022]
Abstract
Children's hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4-11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9-11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency to the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments.
Collapse
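The spectral-weighting analysis described above (measuring how much each frequency band influences a listener's decisions) can be sketched in a few lines of NumPy. Everything in this toy model is invented for illustration: the band count, the weight profile, and the simulated observer; only the weight-estimation step, correlating per-band noise with trial-by-trial responses, reflects the general method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_bands = 5000, 6
# Hypothetical observer: attends mostly to band 2 (the "target" band),
# with some leakage to the neighbouring bands.
true_weights = np.array([0.05, 0.2, 1.0, 0.2, 0.05, 0.0])

# Per-trial energy perturbations in each spectral band (the masker noise).
noise = rng.normal(0.0, 1.0, size=(n_trials, n_bands))

# Binary "tone present" responses from a noisy linear observer.
internal_noise = rng.normal(0.0, 0.5, size=n_trials)
responses = (noise @ true_weights + internal_noise > 0).astype(float)

# Decision weights: correlate each band's noise with the responses.
# (Equivalent, up to scale, to mean noise on "yes" minus "no" trials.)
weights = np.array([np.corrcoef(noise[:, b], responses)[0, 1]
                    for b in range(n_bands)])
weights /= np.abs(weights).max()  # normalise for comparison
```

A developmental version of this analysis would compare the estimated weight profiles across age groups: an adult-like observer concentrates weight on the target band, while a poorly selective one spreads weight onto off-frequency bands.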
|
41
|
Klein M, Afonso B, Vonner AJ, Hernandez-Nunez L, Berck M, Tabone CJ, Kane EA, Pieribone VA, Nitabach MN, Cardona A, Zlatic M, Sprecher SG, Gershow M, Garrity PA, Samuel AD. Sensory determinants of behavioral dynamics in Drosophila thermotaxis. Proc Natl Acad Sci U S A 2015; 112:E220-9. [PMID: 25550513 DOI: 10.1073/pnas.1416212112] [Citation(s) in RCA: 95] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Complex animal behaviors are built from dynamical relationships between sensory inputs, neuronal activity, and motor outputs in patterns with strategic value. Connecting these patterns illuminates how nervous systems compute behavior. Here, we study Drosophila larva navigation up temperature gradients toward preferred temperatures (positive thermotaxis). By tracking the movements of animals responding to fixed spatial temperature gradients or random temperature fluctuations, we calculate the sensitivity and dynamics of the conversion of thermosensory inputs into motor responses. We discover three thermosensory neurons in each dorsal organ ganglion (DOG) that are required for positive thermotaxis. Random optogenetic stimulation of the DOG thermosensory neurons evokes behavioral patterns that mimic the response to temperature variations. In vivo calcium and voltage imaging reveals that the DOG thermosensory neurons exhibit activity patterns with sensitivity and dynamics matched to the behavioral response. Temporal processing of temperature variations carried out by the DOG thermosensory neurons emerges in distinct motor responses during thermotaxis.
Collapse
|
42
|
Meso AI, Chemla S. Perceptual fields reveal previously hidden dynamics of human visual motion sensitivity. J Neurophysiol 2014; 114:1360-3. [PMID: 25339713 DOI: 10.1152/jn.00698.2014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2014] [Accepted: 10/21/2014] [Indexed: 11/22/2022] Open
Abstract
Motion sensitivity is a fundamental property of human vision. Although its neural correlates are normally only directly accessible with neurophysiological approaches, Neri (Neri P. J Neurosci 34: 8449-8491, 2014) proposed psychophysical reverse correlation to derive perceptual fields, revealing previously unseen dynamics of human motion detection. In this Neuroforum article, these key findings are discussed and placed in a broader context, highlighting how spatial-scale considerations may bear on the interpretation of both the findings and the proposed dynamic model.
Collapse
Affiliation(s)
- Andrew Isaac Meso
- Institut de Neurosciences de la Timone, UMR 7289 Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille, France
| | - Sandrine Chemla
- Institut de Neurosciences de la Timone, UMR 7289 Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille, France
| |
Collapse
|
43
|
Abstract
In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.
Collapse
|
44
|
Pernet CR, Belin P, Jones A. Behavioral evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli. Front Psychol 2014; 4:1018. [PMID: 24474943 PMCID: PMC3893619 DOI: 10.3389/fpsyg.2013.01018] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2013] [Accepted: 12/23/2013] [Indexed: 11/29/2022] Open
Abstract
Both voice gender perception and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left vs. right hemisphere and anterior vs. posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioral differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items in the phoneme categorization task (a male and female voice producing the same phonemes) than in the gender task (the same person producing two phonemes), results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect reaction times (RTs), which disagrees with the classical psycholinguistic models in which voice information is stripped away or normalized to access phonetic content. Also, despite similar average response (percentages) and perceptual (d') curves, a reverse correlation analysis on acoustic features revealed that only the vowel formant frequencies distinguished stimuli in the gender task, whilst, as expected, the formant frequencies of the consonant distinguished stimuli in the phoneme task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.
Collapse
Affiliation(s)
- Cyril R. Pernet
- Brain Research Imaging Centre, SINAPSE Collaboration, University of Edinburgh, Edinburgh, UK
| | - Pascal Belin
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Anna Jones
- Brain Research Imaging Centre, SINAPSE Collaboration, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
45
|
Abstract
To understand how different spatial frequencies contribute to the overall perceived contrast of complex, broadband photographic images, we adapted the classification image paradigm. Using natural images as stimuli, we randomly varied relative contrast amplitude at different spatial frequencies and had human subjects determine which images had higher contrast. Then, we determined how the random variations corresponded with the human judgments. We found that the overall contrast of an image is disproportionately determined by how much contrast is between 1 and 6 c/°, around the peak of the contrast sensitivity function (CSF). We then employed the basic components of contrast psychophysics modeling to show that the CSF alone is not enough to account for our results and that an increase in gain control strength toward low spatial frequencies is necessary. One important consequence of this is that contrast constancy, the apparent independence of suprathreshold perceived contrast and spatial frequency, will not hold during viewing of natural images. We also found that images with darker low-luminance regions tended to be judged as having higher overall contrast, which we interpret as the consequence of darker local backgrounds resulting in higher band-limited contrast response in the visual system.
Collapse
Affiliation(s)
- Andrew M Haun
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
| | | |
Collapse
|
46
|
Abstract
Little is known about how older persons determine whether someone deserves their trust based on facial appearance, a process referred to as "facial trustworthiness." In the past few years, Todorov and colleagues have argued that, in young adults, trustworthiness judgments are an extension of emotional judgments and, therefore, that trust judgments are made along a continuum between anger and happiness (Todorov, 2008; Engell et al., 2010). Evidence from the literature on emotion processing suggests that older adults tend to be less efficient than younger adults in the recognition of negative facial expressions (Calder et al., 2003; Firestone et al., 2007; Ruffman et al., 2008; Chaby and Narme, 2009). Based on Todorov's theory and the fact that older adults seem to be less efficient than younger adults in identifying emotional expressions, one could expect that older individuals would have different representations of trustworthy faces and would use different cues than younger adults to make such judgments. We verified this hypothesis using a variation of Mangini and Biederman's (2004) reverse correlation method to test and compare classification images resulting from trustworthiness judgments (in the context of money investment), from happiness judgments, and from anger judgments in two groups of participants: young adults and healthy older adults. Our results show that for elderly participants, both happy and angry representations are correlated with trustworthiness judgments. However, in young adults, trustworthiness judgments are mainly correlated with happiness representations. These results suggest that young and older adults differ in how they judge trustworthiness.
Collapse
Affiliation(s)
- Catherine Ethier-Majcher
- Centre de recherche en Neuropsychologie Expérimentale et Cognition, Montréal, QC, Canada; Centre de recherche de l'Institut universitaire de gériatrie de Montréal, Montréal, QC, Canada; Département de psychologie, Université de Montréal, Montréal, QC, Canada
| | | | | |
Collapse
|
47
|
Nagai T, Ono Y, Tani Y, Koida K, Kitazaki M, Nakauchi S. Image regions contributing to perceptual translucency: A psychophysical reverse-correlation study. Iperception 2013; 4:407-28. [PMID: 24349699 PMCID: PMC3859557 DOI: 10.1068/i0576] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2012] [Revised: 07/26/2013] [Indexed: 11/21/2022] Open
Abstract
The spatial luminance relationship between shading patterns and specular highlight is suggested to be a cue for perceptual translucency (Motoyoshi, 2010). Although local image features are also important for translucency perception (Fleming & Bulthoff, 2005), they have rarely been investigated. Here, we aimed to extract spatial regions related to translucency perception from computer graphics (CG) images of objects using a psychophysical reverse-correlation method. From many trials in which the observer compared the perceptual translucency of two CG images, we obtained translucency-related patterns showing which image regions were related to perceptual translucency judgments. An analysis of the luminance statistics calculated within these image regions showed that (1) the global rms contrast within an entire CG image was not related to perceptual translucency and (2) the local mean luminance of specific image regions within the CG images correlated well with perceptual translucency. However, the image regions contributing to perceptual translucency differed greatly between observers. These results suggest that perceptual translucency does not rely on global luminance statistics such as global rms contrast, but rather depends on local image features within specific image regions. There may be some “hot spots” effective for perceptual translucency, although which of many hot spots are used in judging translucency may be observer dependent.
Collapse
Affiliation(s)
- Takehiro Nagai
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; and Department of Informatics, Yamagata University, Yonezawa, Japan; e-mail:
| | - Yuki Ono
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; e-mail:
| | - Yusuke Tani
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; e-mail:
| | - Kowa Koida
- Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Toyohashi, Japan; e-mail:
| | - Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; e-mail:
| | - Shigeki Nakauchi
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan; e-mail:
| |
Collapse
|
48
|
Imhoff R, Woelki J, Hanke S, Dotsch R. Warmth and competence in your face! Visual encoding of stereotype content. Front Psychol 2013; 4:386. [PMID: 23825468 PMCID: PMC3695562 DOI: 10.3389/fpsyg.2013.00386] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2013] [Accepted: 06/10/2013] [Indexed: 11/13/2022] Open
Abstract
Previous research suggests that stereotypes about a group's warmth bias our visual representation of group members. Based on the stereotype content model (SCM), the current research explored whether the second big dimension of social perception, competence, is also reflected in visual stereotypes. To test this, participants created typical faces for groups either high in warmth and low in competence (male nursery teachers) or vice versa (managers) in a reverse correlation image classification task, which allows for the visualization of stereotypes without any a priori assumptions about relevant dimensions. In support of the independent encoding of both SCM dimensions, hypothesis-blind raters judged the resulting visualizations of nursery teachers as warmer but less competent than the resulting image for managers, even when statistically controlling for judgments on one dimension. People thus seem to use facial cues indicating both relevant dimensions to make sense of social groups in a parsimonious, non-verbal, and spontaneous manner.
Collapse
Affiliation(s)
- Roland Imhoff
- Department of Psychology, Social Psychology, Social Cognition, University of Cologne, Cologne, Germany
| | | | | | | |
Collapse
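The reverse-correlation image-classification task used in this entry has a simple computational core: average the noise fields as a function of the observer's choices. The toy simulation below recovers an assumed internal template as a classification image; the template, resolution, trial count, and idealized template-matching observer are all invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

size, n_trials = 32, 5000
# Hypothetical internal template: a bright blob standing in for a facial
# feature the observer associates with the target category.
y, x = np.mgrid[:size, :size]
template = np.exp(-((x - 20) ** 2 + (y - 12) ** 2) / 30.0)

acc = np.zeros((size, size))
for _ in range(n_trials):
    noise = rng.normal(size=(size, size))
    # Two-image trial: base + noise vs base - noise; the simulated observer
    # picks whichever version correlates better with its internal template.
    chose_plus = float((noise * template).sum()) > 0.0
    acc += noise if chose_plus else -noise

ci = acc / n_trials  # classification image: estimate of the template
```

In the actual paradigm the choices come from human participants rather than a template matcher, and the resulting image is then rated by independent judges, as in the study above.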
|
49
|
Abstract
Visual crowding is the inability to identify visible features when they are surrounded by other structure in the peripheral field. Since natural environments are replete with structure and most of our visual field is peripheral, crowding represents the primary limit on vision in the real world. However, little is known about the characteristics of crowding under natural conditions. Here we examine where crowding occurs in natural images. Observers were required to identify which of four locations contained a patch of "dead leaves" (synthetic, naturalistic contour structure) embedded into natural images. Threshold size for the dead leaves patch scaled with eccentricity in a manner consistent with crowding. Reverse correlation at multiple scales was used to determine local image statistics that correlated with task performance. Stepwise model selection revealed that local RMS contrast and edge density at the site of the dead leaves patch were of primary importance in predicting the occurrence of crowding once patch size and eccentricity had been considered. The absolute magnitudes of the regression weights for RMS contrast at different spatial scales varied in a manner consistent with receptive field sizes measured in striate cortex of primate brains. Our results are consistent with crowding models that are based on spatial averaging of features in the early stages of the visual system, and allow the prediction of where crowding is likely to occur in natural images.
Collapse
Affiliation(s)
- Thomas S. A. Wallis
- Schepens Eye Research Institute, Massachusetts Eye and Ear Infirmary, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- School of Psychology, The University of Western Australia, Perth, Australia
| | - Peter J. Bex
- Schepens Eye Research Institute, Massachusetts Eye and Ear Infirmary, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
50
|
Nestor A, Vettel JM, Tarr MJ. Internal representations for face detection: an application of noise-based image classification to BOLD responses. Hum Brain Mapp 2012; 34:3101-15. [PMID: 22711230 DOI: 10.1002/hbm.22128] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2011] [Revised: 04/22/2012] [Accepted: 04/23/2012] [Indexed: 11/10/2022] Open
Abstract
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations.
Collapse
Affiliation(s)
- Adrian Nestor
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania; Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania
| | | | | |
Collapse
|