1. Balsdon T, Philiastides MG. Confidence control for efficient behaviour in dynamic environments. Nat Commun 2024;15:9089. PMID: 39433579; PMCID: PMC11493976; DOI: 10.1038/s41467-024-53312-3.
Abstract
Signatures of confidence emerge during decision-making, implying confidence may be of functional importance to decision processes themselves. We formulate an extension of sequential sampling models of decision-making in which confidence is used online to actively moderate the quality and quantity of evidence accumulated for decisions. The benefit of this model is that it can respond to dynamic changes in sensory evidence quality. We highlight this feature by designing a dynamic sensory environment where evidence quality can be smoothly adapted within the timeframe of a single decision. Our model with confidence control offers a superior description of human behaviour in this environment, compared to sequential sampling models without confidence control. Using multivariate decoding of electroencephalography (EEG), we uncover EEG correlates of the model's latent processes, and show stronger EEG-derived confidence control is associated with faster, more accurate decisions. These results support a neurobiologically plausible framework featuring confidence as an active control mechanism for improving behavioural efficiency.
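As a rough, hedged illustration of the kind of mechanism described above (not the authors' published model), the sketch below simulates a drift-diffusion accumulator in which a crude online confidence signal scales the gain on incoming evidence; the control rule, the confidence proxy, and all parameter values are assumptions made for this example.

```python
import numpy as np

def simulate_trial(drift=0.1, noise=1.0, threshold=1.5, dt=0.01,
                   gain_rate=0.5, max_t=5.0, rng=None):
    """One drift-diffusion trial in which momentary confidence (distance of the
    accumulator from zero, relative to the bound) up-weights new evidence.
    Illustrative only; not the model fitted in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        confidence = abs(x) / threshold            # assumed online confidence proxy
        gain = 1.0 + gain_rate * confidence        # confidence control of evidence gain
        x += gain * (drift * dt + noise * np.sqrt(dt) * rng.normal())
        t += dt
    return (1 if x > 0 else 0), t                  # choice, response time

choices, rts = zip(*(simulate_trial(rng=np.random.default_rng(i)) for i in range(500)))
print(f"accuracy = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f} s")
```

Setting gain_rate to zero reduces the sketch to a plain diffusion model, i.e., the kind of no-control comparison model described above.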
Affiliation(s)
- Tarryn Balsdon
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom.
- Laboratory of Perceptual Systems, DEC, ENS, PSL University, CNRS (UMR 8248), Paris, France.
- Marios G Philiastides
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom.
2. Donato R, Contillo A, Campana G, Roccato M, Gonçalves ÓF, Pavan A. Visual Perceptual Learning of Form-Motion Integration: Exploring the Involved Mechanisms with Transfer Effects and the Equivalent Noise Approach. Brain Sci 2024;14:997. PMID: 39452011; PMCID: PMC11506814; DOI: 10.3390/brainsci14100997.
Abstract
Background: Visual perceptual learning plays a crucial role in shaping our understanding of how the human brain integrates visual cues to construct coherent perceptual experiences. The visual system is continually challenged to integrate a multitude of visual cues, including form and motion, to create a unified representation of the surrounding visual scene. This process involves both the processing of local signals and their integration into a coherent global percept. Over the past several decades, researchers have explored the mechanisms underlying this integration, focusing on concepts such as internal noise and sampling efficiency, which pertain to local and global processing, respectively. Objectives and Methods: In this study, we investigated the influence of visual perceptual learning on non-directional motion processing using dynamic Glass patterns (GPs) and modified Random-Dot Kinematograms (mRDKs). We also explored the mechanisms of learning transfer to different stimuli and tasks. Specifically, we aimed to assess whether visual perceptual learning based on illusory directional motion, triggered by form and motion cues (dynamic GPs), transfers to stimuli that elicit comparable illusory motion, such as mRDKs. Additionally, we examined whether training on form and motion coherence thresholds improves internal noise filtering and sampling efficiency. Results: Our results revealed significant learning effects on the trained task, enhancing the perception of dynamic GPs. Furthermore, there was a substantial learning transfer to the non-trained stimulus (mRDKs) and partial transfer to a different task. The data also showed differences in coherence thresholds between dynamic GPs and mRDKs, with GPs showing lower coherence thresholds than mRDKs. Finally, an interaction between visual stimulus type and session for sampling efficiency revealed that the effect of training session on participants' performance varied depending on the type of visual stimulus, with dynamic GPs being influenced differently than mRDKs. Conclusion: These findings highlight the complexity of perceptual learning and suggest that the transfer of learning effects may be influenced by the specific characteristics of both the training stimuli and tasks, providing valuable insights for future research in visual processing.
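For readers unfamiliar with the equivalent noise terms used above, the standard linear-amplifier form of the model relates averaging thresholds to the two quantities named in the abstract; this is the textbook formulation, not necessarily the exact parameterization fitted in the study.

```latex
% Equivalent noise (linear amplifier) model: observed variance of the
% direction/orientation estimate as a function of external noise.
\sigma_{\mathrm{obs}}^{2} \;=\; \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n_{\mathrm{samp}}}
```

Here σ_int is the internal (local) noise, σ_ext the external noise added to the stimulus, and n_samp the effective number of samples (sampling efficiency); thresholds measured at several levels of σ_ext constrain the two free parameters.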
Affiliation(s)
- Rita Donato
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Gianluca Campana
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Human Inspired Technology Research Centre, University of Padova, Via Luzzati 4, 35121 Padova, Italy
- Marco Roccato
- Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Óscar F. Gonçalves
- Brainloop Laboratory, CINTESIS@RISE, CINTESIS.UPT, Universidade Portucalense Infante D. Henrique, 4200-072 Porto, Portugal
- Andrea Pavan
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
3. Silva AE, Harding JE, Chakraborty A, Dai DW, Gamble GD, McKinlay CJD, Nivins S, Shah R, Thompson B. Associations Between Autism Spectrum Quotient and Integration of Visual Stimuli in 9-year-old Children: Preliminary Evidence of Sex Differences. J Autism Dev Disord 2024;54:2987-2997. PMID: 37344731; DOI: 10.1007/s10803-023-06035-1.
Abstract
PURPOSE The dorsal stream vulnerability hypothesis posits that the dorsal stream, responsible for visual motion and visuo-motor processing, may be particularly vulnerable during neurodevelopment. Consistent with this, autism spectrum disorder (ASD) has been associated with deficits in global motion integration, though deficits in ventral stream tasks, such as form identification, have also been reported. In the current study, we examined whether a similar pattern of results is found in a cohort of 381 children born with neurodevelopmental risk factors and exhibiting a wide spectrum of caregiver-reported autistic traits. METHODS We examined the associations between global motion perception, global form perception, fine motor function, visual-motor integration, and autistic traits (autism spectrum quotient, AQ) using linear regression, accounting for possible interactions with sex and other factors relevant to neurodevelopment. RESULTS All assessments of dorsal stream function were significantly associated with AQ such that worse performance predicted higher AQ scores. We also observed a significant sex interaction, with worse global form perception associated with higher AQ in boys (n = 202) but not girls (n = 179). CONCLUSION We found widespread associations between dorsal stream functions and autistic traits. These associations were observed in a large group of children with a range of AQ scores, demonstrating a range of visual function across the full spectrum of autistic traits. In addition, ventral function was associated with AQ in boys but not girls. Sex differences in the associations between visual processing and neurodevelopment should be considered in the designs of future studies.
Affiliation(s)
- Andrew E Silva
- School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, Canada.
- Jane E Harding
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Arijit Chakraborty
- Chicago College of Optometry, Midwestern University, Downers Grove, IL, USA
- Darren W Dai
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Greg D Gamble
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Christopher J D McKinlay
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Kidz First Neonatal Care, Auckland, New Zealand
- Samson Nivins
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Rajesh Shah
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Benjamin Thompson
- School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, Canada
- Liggins Institute, The University of Auckland, Auckland, New Zealand
- Centre for Eye and Vision Research Limited, 17W Science Park, Shatin, Hong Kong
4. Pruitt J, Knotts JD, Odegaard B. Consistent metacognitive efficiency and variable response biases in peripheral vision. J Vis 2024;24:4. PMID: 39110584; PMCID: PMC11314628; DOI: 10.1167/jov.24.8.4.
Abstract
Across the visual periphery, perceptual and metacognitive abilities differ depending on the locus of visual attention, the location of peripheral stimulus presentation, the task design, and many other factors. In this investigation, we aimed to illuminate the relationship between attention and eccentricity in the visual periphery by estimating perceptual sensitivity, metacognitive sensitivity, and response biases across the visual field. In a 2AFC detection task, participants were asked to determine whether a signal was present or absent at one of eight peripheral locations (±10°, 20°, 30°, and 40°), using either a valid or invalid attentional cue. As expected, results revealed that perceptual sensitivity declined with eccentricity and was modulated by attention, with higher sensitivity on validly cued trials. Furthermore, a significant main effect of eccentricity on response bias emerged, with variable (but relatively unbiased) c'a values from 10° to 30°, and conservative c'a values at 40°. Regarding metacognitive sensitivity, significant main effects of attention and eccentricity were found, with metacognitive sensitivity decreasing with eccentricity, and decreasing in the invalid cue condition. Interestingly, metacognitive efficiency, as measured by the ratio of meta-d'a/d'a, was not modulated by attention or eccentricity. Overall, these findings demonstrate (1) that in some circumstances, observers have surprisingly robust metacognitive insights into how performance changes across the visual field and (2) that the periphery may be subject to variable detection biases that are contingent on the exact location in peripheral space.
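As background for the sensitivity and bias measures reported above, the snippet below computes the standard equal-variance signal-detection indices from hit and false-alarm rates; it is a generic textbook calculation, not the study's analysis code (the study reports the unequal-variance versions d'a and c'a, and metacognitive efficiency meta-d'a/d'a additionally requires fitting confidence-rating ROCs, which is not shown here).

```python
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance signal detection theory: sensitivity d' and criterion c."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa
    criterion = -0.5 * (z_h + z_fa)   # positive = conservative (bias toward "absent")
    return d_prime, criterion

print(sdt_indices(0.80, 0.20))   # ~(1.68, 0.00): unbiased observer
print(sdt_indices(0.60, 0.05))   # ~(1.90, 0.70): conservative observer
```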
Affiliation(s)
- Joseph Pruitt
- University of Florida, Gainesville, FL, USA
- https://orcid.org/0000-0002-4887-6090
- Brian Odegaard
- University of Florida, Gainesville, FL, USA
- https://orcid.org/0000-0002-5459-1884
5. Neuenswander KL, Goodale BM, Bryant GA, Johnson KL. Sex ratios in vocal ensembles affect perceptions of threat and belonging. Sci Rep 2024;14:14575. PMID: 38914752; PMCID: PMC11196271; DOI: 10.1038/s41598-024-65535-x.
Abstract
People often interact with groups (i.e., ensembles) during social interactions. Given that group-level information is important in navigating social environments, we expect perceptual sensitivity to aspects of groups that are relevant for personal threat as well as social belonging. Most ensemble perception research has focused on visual ensembles, with little research looking at auditory or vocal ensembles. Across four studies, we present evidence that (i) perceivers accurately extract the sex composition of a group from voices alone, (ii) judgments of threat increase concomitantly with the number of men, and (iii) listeners' sense of belonging depends on the number of same-sex others in the group. This work advances our understanding of social cognition, interpersonal communication, and ensemble coding to include auditory information, and reveals people's ability to extract relevant social information from brief exposures to vocalizing groups.
Affiliation(s)
- Kelsey L Neuenswander
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA.
- Gregory A Bryant
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA
- Kerri L Johnson
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA
- Department of Psychology, University of California, Los Angeles, USA
6. Attarha M, Mahncke H, Merzenich M. The Real-World Usability, Feasibility, and Performance Distributions of Deploying a Digital Toolbox of Computerized Assessments to Remotely Evaluate Brain Health: Development and Usability Study. JMIR Form Res 2024;8:e53623. PMID: 38739916; PMCID: PMC11130778; DOI: 10.2196/53623.
Abstract
BACKGROUND An ongoing global challenge is managing brain health and understanding how performance changes across the lifespan. OBJECTIVE We developed and deployed a set of self-administrable, computerized assessments designed to measure key indexes of brain health across the visual and auditory sensory modalities. In this pilot study, we evaluated the usability, feasibility, and performance distributions of the assessments in a home-based, real-world setting without supervision. METHODS Potential participants were untrained users who self-registered on an existing brain training app called BrainHQ. Participants were contacted via a recruitment email and registered remotely to complete a demographics questionnaire and 29 unique assessments on their personal devices. We examined participant engagement, descriptive and psychometric properties of the assessments, associations between performance and self-reported demographic variables, cognitive profiles, and factor loadings. RESULTS Of the 365,782 potential participants contacted via a recruitment email, 414 (0.11%) registered, of whom 367 (88.6%) completed at least one assessment and 104 (25.1%) completed all 29 assessments. Registered participants were, on average, aged 63.6 (SD 14.8; range 13-107) years, mostly female (265/414, 64%), educated (329/414, 79.5% with a degree), and White (349/414, 84.3% White and 48/414, 11.6% people of color). A total of 72% (21/29) of the assessments showed no ceiling or floor effects or had easily modifiable score bounds to eliminate these effects. When correlating performance with self-reported demographic variables, 72% (21/29) of the assessments were sensitive to age, 72% (21/29) of the assessments were insensitive to gender, 93% (27/29) of the assessments were insensitive to race and ethnicity, and 93% (27/29) of the assessments were insensitive to education-based differences. Assessments were brief, with a mean duration of 3 (SD 1.0) minutes per task. The pattern of performance across the assessments revealed distinctive cognitive profiles and loaded onto 4 independent factors. CONCLUSIONS The assessments were both usable and feasible and warrant a full normative study. A digital toolbox of scalable and self-administrable assessments that can evaluate brain health at a glance (and longitudinally) may lead to novel future applications across clinical trials, diagnostics, and performance optimization.
7. White PA. The perceptual timescape: Perceptual history on the sub-second scale. Cogn Psychol 2024;149:101643. PMID: 38452720; DOI: 10.1016/j.cogpsych.2024.101643.
Abstract
There is a high-capacity store with a brief time span (∼1000 ms) into which information enters from perceptual processing, often called iconic memory or sensory memory. It is proposed that a main function of this store is to hold recent perceptual information in a temporally segregated representation, named the perceptual timescape. The perceptual timescape is a continually active representation of change and continuity over time that endows the perceived present with a perceived history. This is accomplished primarily by two kinds of time-marking information: time-distance information, which marks all items of information in the perceptual timescape according to how far in the past they occurred, and ordinal temporal information, which organises items of information in terms of their temporal order. Added to that is information about the connectivity of perceptual objects over time. These kinds of information connect individual items over a brief span of time so as to represent change, persistence, and continuity over time. It is argued that there is a one-way street of information flow from perceptual processing either to the perceived present or directly into the perceptual timescape, and thence to working memory. Consistent with that, the information structure of the perceptual timescape supports postdictive reinterpretations of recent perceptual information. Temporal integration on a time scale of hundreds of milliseconds takes place in perceptual processing and does not draw on information in the perceptual timescape, which is concerned with temporal segregation, not integration.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, Wales CF10 3YG, United Kingdom.
8. Rubinstein JF, Singh M, Kowler E. Bayesian approaches to smooth pursuit of random dot kinematograms: effects of varying RDK noise and the predictability of RDK direction. J Neurophysiol 2024;131:394-416. PMID: 38149327; DOI: 10.1152/jn.00116.2023.
Abstract
Smooth pursuit eye movements respond on the basis of both immediate and anticipated target motion, where anticipations may be derived from either memory or perceptual cues. To study the combined influence of both immediate sensory motion and anticipation, subjects pursued clear or noisy random dot kinematograms (RDKs) whose mean directions were chosen from Gaussian distributions with SDs = 10° (narrow prior) or 45° (wide prior). Pursuit directions were consistent with Bayesian theory in that transitions over time from dependence on the prior to near total dependence on immediate sensory motion (likelihood) took longer with the noisier RDKs and with the narrower, more reliable, prior. Results were fit to Bayesian models in which parameters representing the variability of the likelihood either were or were not constrained to be the same for both priors. The unconstrained model provided a statistically better fit, with the influence of the prior in the constrained model smaller than predicted from strict reliability-based weighting of prior and likelihood. Factors that may have contributed to this outcome include prior variability different from nominal values, low-level sensorimotor learning with the narrow prior, or departures of pursuit from strict adherence to reliability-based weighting. Although modifications of, or alternatives to, the normative Bayesian model will be required, these results, along with previous studies, suggest that Bayesian approaches are a promising framework to understand how pursuit combines immediate sensory motion, past history, and informative perceptual cues to accurately track the target motion that is most likely to occur in the immediate future.NEW & NOTEWORTHY Smooth pursuit eye movements respond on the basis of anticipated, as well as immediate, target motions. Bayesian models using reliability-based weighting of previous (prior) and immediate target motions (likelihood) accounted for many, but not all, aspects of pursuit of clear and noisy random dot kinematograms with different levels of predictability. Bayesian approaches may solve the long-standing problem of how pursuit combines immediate sensory motion and anticipation of future motion to configure an effective response.
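The reliability-based weighting tested above has a simple closed form in the Gaussian case, shown here only as a reference for the idea (pursuit directions are circular and the fitted models in the paper are more elaborate).

```latex
% Posterior estimate for a Gaussian prior N(\mu_p, \sigma_p^2) and a
% Gaussian sensory likelihood N(\mu_s, \sigma_s^2):
\hat{\theta} = w\,\mu_s + (1 - w)\,\mu_p,
\qquad
w = \frac{1/\sigma_s^{2}}{1/\sigma_s^{2} + 1/\sigma_p^{2}}
```

Noisier RDKs correspond to a larger σ_s and shift weight toward the prior, and a narrower prior (smaller σ_p) does the same, which is consistent with the slower transition to likelihood-dominated pursuit described above.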
Affiliation(s)
- Jason F Rubinstein
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Manish Singh
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
- Eileen Kowler
- Department of Psychology, Rutgers University, Piscataway, New Jersey, United States
9. Azarov D, Grigorev D, Utochkin I. A signal-detection account of item-based and ensemble-based visual change detection: A reply to Harrison, McMaster, and Bays. J Vis 2024;24:10. PMID: 38407901; PMCID: PMC10902873; DOI: 10.1167/jov.24.2.10.
Abstract
Growing empirical evidence shows that ensemble information (e.g., the average feature or feature variance of a set of objects) affects visual working memory for individual items. Recently, Harrison, McMaster, and Bays (2021) used a change detection task to test whether observers explicitly rely on ensemble representations to improve their memory for individual objects. They found that sensitivity to simultaneous changes in all memorized items (which also globally changed the set summary statistics) rarely exceeded a level predicted by the so-called optimal summation model within the signal-detection framework. This model implies simple integration of evidence for change from all individual items and no additional evidence coming from the ensemble. Here, we argue that performance at the level of optimal summation does not rule out the use of ensemble information. First, in two experiments, we show that, even if evidence from only one item is available at test, the statistics of the whole memory set affect performance. Second, we argue that optimal summation itself can be conceptually interpreted as one of the strategies of holistic, ensemble-based decision making. We also redefine the reference level for the item-based strategy as the so-called "minimum rule," which predicts performance far below the optimum. We found that observers in both our study and that of Harrison et al. (2021) consistently outperformed this level. We conclude that observers can rely on ensemble information when performing visual change detection. Overall, our work clarifies and refines the use of signal-detection analysis in measuring and modeling working memory.
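To make the "optimal summation" benchmark concrete, the simulation below treats each memorized item as an independent, equal-variance Gaussian evidence source and sums the evidence across items, which predicts a combined sensitivity of √N × d' when all N items change at once. It is a generic signal-detection illustration, not the exact model code of either paper, and the minimum-rule benchmark is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def summed_dprime(n_items=4, d_single=1.0, n_trials=200_000, seed=0):
    """Monte Carlo d' for an observer who sums change evidence over n_items
    when every item changes (signal trials) versus none changes (noise trials)."""
    rng = np.random.default_rng(seed)
    signal = rng.normal(d_single, 1.0, (n_trials, n_items)).sum(axis=1)
    noise = rng.normal(0.0, 1.0, (n_trials, n_items)).sum(axis=1)
    crit = 0.5 * (signal.mean() + noise.mean())            # unbiased criterion
    hit, fa = (signal > crit).mean(), (noise > crit).mean()
    return norm.ppf(hit) - norm.ppf(fa)

print(summed_dprime())           # ~2.0 for four items
print(np.sqrt(4) * 1.0)          # analytic optimal-summation prediction
```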
10. McKee SP. Envisioning a Woman Scientist. Annu Rev Vis Sci 2023;9:1-14. PMID: 36930944; DOI: 10.1146/annurev-vision-111022-123844.
Abstract
I entered science at a particularly lucky time. By the mid-1960s, women were being encouraged to pursue serious scientific careers. During the 60-year span of my career, women have become equal partners with men in scientific research, particularly in the biological sciences. There also has been abundant funding for research, which allowed me to succeed in a "soft-money" position at Smith-Kettlewell Eye Research Institute, a place that was especially supportive for a woman scientist with children. In this article, I describe the findings that I think represent the most interesting and enduring scientific work from my career.
Affiliation(s)
- Suzanne P McKee
- Smith-Kettlewell Eye Research Institute, San Francisco, California, USA.
11. Li B, Xiao L, Yu Q, Huang X. Neural correlates of aftereffects induced by adaptations to single and average durations. Psych J 2023;12:479-490. PMID: 36916767; DOI: 10.1002/pchj.640.
Abstract
Duration perception can be heavily distorted owing to repetitive exposure to a relatively long or short sensory event, often causing a duration aftereffect. Here, we used a novel procedure to show that adaptations to both single and average durations produced the duration aftereffect. Participants completed a duration reproduction task (Experiment 1) or a duration category rating task (Experiment 2) after long-term adaptations to a stimulus of medium duration and to stimuli of averagely medium duration. We found that adaptations to both single and average durations resulted in duration aftereffects. The simultaneously recorded functional magnetic resonance imaging (fMRI) data revealed that the reduction in neural activity due to long-term adaptation to single duration was observed in the right supramarginal gyrus (SMG) of the parietal lobe, while adaptation to average duration resulted in fMRI adaptations in the left postcentral gyrus (PCG) and middle cingulate gyrus (MCG). At the individual level, the magnitude of the behavioral aftereffect was positively correlated with the magnitude of fMRI adaptation in the right SMG after adaptation to single duration, while there were no significantly positive correlations between the behavioral aftereffect and fMRI adaptations in the left PCG and MCG. These results suggest that there are different neural mechanisms for aftereffects caused by adaptations to single and average durations.
Affiliation(s)
- Baolin Li
- School of Psychology, Shaanxi Normal University, Xi'an, China
- Faculty of Psychology, Southwest University, Chongqing, China
- Lijuan Xiao
- Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China
- Qinlin Yu
- School of Life Sciences, Peking University, Beijing, China
- Xiting Huang
- Faculty of Psychology, Southwest University, Chongqing, China
12. Iakovlev AU, Utochkin IS. Ensemble averaging: What can we learn from skewed feature distributions? J Vis 2023;23:5. PMID: 36602815; PMCID: PMC9832727; DOI: 10.1167/jov.23.1.5.
Abstract
Many studies have shown that observers can accurately estimate the average feature of a group of objects. However, the way the visual system relies on the information from each individual item is still under debate. Some models suggest that some or all items are sampled and averaged arithmetically. Another strategy implies "robust averaging," in which middle elements gain greater weight than outliers. One version of a robust averaging model was recently suggested by Teng et al. (2021), who studied motion direction averaging in skewed feature distributions and found systematic biases toward their modes. They interpreted these biases as evidence for robust averaging and suggested a probabilistic weighting model based on minimization of the virtual loss function. In four experiments, we replicated systematic skew-related biases in another feature domain, namely, orientation averaging. Importantly, we show that the magnitude of the bias is not determined by the locations of the mean or mode alone, but is substantially defined by the shape of the whole feature distribution. We test a model that accounts for such distribution-dependent biases and robust averaging in a biologically plausible way. The model is based on well-established mechanisms of spatial pooling and population encoding of local features by neurons with large receptive fields. Both the loss-function model and the population coding model with a winner-take-all decoding rule accurately predicted the observed patterns, suggesting that the pooled population response model can be considered a neural implementation of the computational algorithms of information sampling and robust averaging in ensemble perception.
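A minimal sketch of the pooling-and-decoding idea summarized above, using assumed wrapped-Gaussian orientation channels and a winner-take-all readout; the channel count, tuning width, and example ensemble are illustrative assumptions rather than the fitted model.

```python
import numpy as np

def pooled_population_estimate(item_orients_deg, n_channels=36, tuning_sd=20.0):
    """Sum orientation-tuned channel responses over all items, then read out
    the preferred orientation of the most active channel (winner-take-all)."""
    prefs = np.linspace(0, 180, n_channels, endpoint=False)           # orientation space
    items = np.asarray(item_orients_deg, float)[:, None]              # shape (items, 1)
    diff = (items - prefs[None, :] + 90) % 180 - 90                   # wrapped difference
    responses = np.exp(-0.5 * (diff / tuning_sd) ** 2).sum(axis=0)    # pooled response
    return prefs[int(np.argmax(responses))]

# A positively skewed ensemble: the readout lands below the arithmetic mean.
ensemble = [10, 12, 14, 16, 20, 30, 45, 70]
print(np.mean(ensemble), pooled_population_estimate(ensemble))
```

Because the pooled response reflects the whole distribution, a skewed ensemble pulls the peak toward its dense region, which is the kind of distribution-dependent bias described above.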
Affiliation(s)
- Igor S Utochkin
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
13. A comparison of equivalent noise methods in investigating local and global form and motion integration. Atten Percept Psychophys 2023;85:152-165. PMID: 36380147; DOI: 10.3758/s13414-022-02595-z.
Abstract
Static and dynamic cues within certain spatiotemporal proximity are used to evoke respective global percepts of form and motion. The limiting factors in this process are, first, internal noise, which indexes local orientation/direction detection, and, second, sampling efficiency, which relates to the processing and the representation of global orientation/direction. These parameters are quantified using the equivalent noise (EN) paradigm. EN has been implemented with just two levels: high and low noise. However, when using this simplified version, one must assume the shape of the overall noise dependence, as the intermediate points are missing. Here, we investigated whether two distinct EN methods, the 8-point and the simplified 2-point version, reveal comparable parameter estimates. This was performed for three different types of stimuli: random dot kinematograms, and static and dynamic translational Glass patterns, to investigate how constant internal noise estimates are, and how sampling efficiency might vary over tasks. The results indicated substantial compatibility between estimates over a wide range of external noise levels sampled with eight data points, and a simplified version producing two highly informative data points. Our findings support the use of a simplified procedure to estimate essential form-motion integration parameters, paving the way for rapid and critical applications to populations that cannot tolerate protracted measurements.
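A sketch of how the two versions of the paradigm recover the same parameters under the standard linear-amplifier EN model, threshold² = (σ_int² + σ_ext²)/n; the external-noise levels and "true" parameter values below are invented for illustration, and the data are noiseless.

```python
import numpy as np
from scipy.optimize import curve_fit

def en_threshold(sigma_ext, sigma_int, n_samp):
    """Threshold predicted by the linear-amplifier equivalent noise model."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

true_int, true_n = 5.0, 4.0                            # invented ground truth
ext = np.array([0, 1, 2, 4, 8, 16, 32, 64], float)     # 8-point external noise levels (deg)
thresh = en_threshold(ext, true_int, true_n)

# 8-point method: fit the full function to all external noise levels.
(fit_int, fit_n), _ = curve_fit(en_threshold, ext, thresh,
                                p0=[1.0, 1.0], bounds=(0, np.inf))

# 2-point method: closed form from one low-noise and one high-noise threshold.
t_low, t_high, ext_high = thresh[0], thresh[-1], ext[-1]
n_2pt = ext_high**2 / (t_high**2 - t_low**2)           # sampling efficiency
int_2pt = t_low * np.sqrt(n_2pt)                       # internal noise

print(fit_int, fit_n)      # 5.0, 4.0
print(int_2pt, n_2pt)      # 5.0, 4.0 (exact here because the data are noiseless)
```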
14. Foveal vision determines the perceived emotion of face ensembles. Atten Percept Psychophys 2023;85:209-221. PMID: 36369614; DOI: 10.3758/s13414-022-02614-z.
Abstract
People can extract summary statistical information from groups of similar objects, an ability called ensemble perception. However, not every object in a group is weighted equally. For example, in ensemble emotion perception, faces far from fixation were weighted less than faces close to fixation. Yet the contribution of foveal input in ensemble emotion perception is still unclear. In two experiments, groups of faces with varying emotions were presented for 100 ms at three different eccentricities (0°, 3°, 8°). Observers reported the perceived average emotion of the group. In two conditions, stimuli consisted of a central face flanked by eight faces (flankers) (central-present condition) and eight faces without the central face (central-absent condition). In the central-present condition, the emotion of the central face was either congruent or incongruent with that of the flankers. In Experiment 1, flanker emotions were uniform (identical flankers); in Experiment 2 they were varied. In both experiments, performance in the central-present condition was superior at 3° compared to 0° and 8°. At 0°, performance was superior in the central-absent (i.e., no foveal input) compared to the central-present condition. Poor performance in the central-present condition was driven by the incongruent condition where the foveal face strongly biased responses. At 3° and 8°, performance was comparable between central-present and central-absent conditions. Our results showed how foveal input determined the perceived emotion of face ensembles, suggesting that ensemble perception fails when salient target information is available in central vision.
15. Töpfer FM, Barbieri R, Sexton CM, Wang X, Soch J, Bogler C, Haynes JD. Psychophysics and computational modeling of feature-continuous motion perception. J Vis 2022;22:16. PMID: 36306146; PMCID: PMC9624271; DOI: 10.1167/jov.22.11.16.
Abstract
Sensory decision-making is frequently studied using categorical tasks, even though the feature space of most stimuli is continuous. Recently, it has become more common to measure feature perception in a gradual fashion, say when studying motion perception across the full space of directions. However, continuous reports can be contaminated by perceptual or motor biases. Here, we examined such biases on perceptual reports by comparing two response methods. With the first method, participants reported motion direction in a motor reference frame by moving a trackball. With the second method, participants used a perceptual frame of reference with a perceptual comparison stimulus. We tested biases using three different versions of random dot kinematograms. We found strong and systematic biases in responses when reporting the direction in a motor frame of reference. For the perceptual frame of reference, these systematic biases were not evident. Independent of the response method, we also detected a systematic misperception where subjects sometimes confuse the physical stimulus direction with its opposite direction. This was confirmed using a von Mises mixture model that estimated the contribution of veridical perception, misperception, and guessing. Importantly, the more sensitive perceptual reporting method revealed that, with increasing levels of sensory evidence, perceptual performance increases not only in the form of higher detection probability, but under certain conditions also in the form of increased precision.
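The mixture described above (veridical reports, opposite-direction misperceptions, and guesses) can be written as a three-component likelihood over report errors; the sketch below evaluates that likelihood for candidate parameters on fake data, with the actual fitting step omitted and all parameter names and values assumed.

```python
import numpy as np
from scipy.stats import vonmises

def mixture_loglik(report_err, p_correct, p_flip, kappa):
    """Log-likelihood of report errors (radians, relative to the true direction) under
    a mixture of: veridical von Mises at 0, a von Mises at pi (opposite direction),
    and uniform guessing with the remaining probability."""
    p_guess = 1.0 - p_correct - p_flip
    like = (p_correct * vonmises.pdf(report_err, kappa, loc=0.0)
            + p_flip * vonmises.pdf(report_err, kappa, loc=np.pi)
            + p_guess / (2 * np.pi))
    return np.sum(np.log(like))

rng = np.random.default_rng(1)
errors = rng.vonmises(0.0, 5.0, size=300)             # fake data: mostly veridical reports
print(mixture_loglik(errors, p_correct=0.8, p_flip=0.1, kappa=5.0))
print(mixture_loglik(errors, p_correct=0.4, p_flip=0.4, kappa=5.0))   # worse fit, lower value
```

Maximizing this log-likelihood over p_correct, p_flip, and kappa (e.g., with scipy.optimize.minimize on its negative) would yield estimates of veridical perception, misperception, and guessing of the kind referred to above.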
Affiliation(s)
- Felix M Töpfer
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- Riccardo Barbieri
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- Charlie M Sexton
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Australia
- Xinhao Wang
- Humboldt-Universität zu Berlin, Berlin School of Mind and Brain and Institute of Psychology, Berlin, Germany
- Joram Soch
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- Research Group Cognitive Geriatric Psychiatry, German Center for Neurodegenerative Diseases, Göttingen, Germany
- Carsten Bogler
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- John-Dylan Haynes
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Bernstein Center for Computational Neuroscience, Berlin Center for Advanced Neuroimaging, and Department of Neurology, Berlin, Germany
- Humboldt-Universität zu Berlin, Berlin School of Mind and Brain and Institute of Psychology, Berlin, Germany
- Technische Universität Dresden, SFB 940 and Cognitive Control, 01069 Dresden, Germany
16. Arslanova I, Takamuku S, Gomi H, Haggard P. Multi-digit tactile perception I: motion integration benefits for tactile trajectories presented bimanually. J Neurophysiol 2022;128:418-433. PMID: 35822710; PMCID: PMC9359661; DOI: 10.1152/jn.00022.2022.
Abstract
Interactions with objects involve simultaneous contact with multiple, not necessarily adjacent, skin regions. While advances have been made in understanding the capacity to selectively attend to a single tactile element among distracting stimulations, here, we examine how multiple stimulus elements are explicitly integrated into an overall tactile percept. Across four experiments, participants averaged the direction of two simultaneous tactile motion trajectories of varying discrepancy delivered to different fingerpads. Averaging performance differed between within- and between-hands conditions in terms of sensitivity and precision but was unaffected by somatotopic proximity between stimulated fingers. First, precision was greater in between-hand compared to within-hand conditions, demonstrating a bimanual perceptual advantage in multi-touch integration. Second, sensitivity to the average direction was influenced by the discrepancy between individual motion signals, but only for within-hand conditions. Overall, our experiments identify key factors that influence perception of simultaneous tactile events. In particular, we show that multi-touch integration is constrained by hand-specific rather than digit-specific mechanisms.
Affiliation(s)
- Irena Arslanova
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Shinya Takamuku
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
17. Chen J, Gegenfurtner KR. Electrophysiological evidence for higher-level chromatic mechanisms in humans. J Vis 2021;21:12. PMID: 34357373; PMCID: PMC8354086; DOI: 10.1167/jov.21.8.12.
Abstract
Color vision in humans starts with three types of cones (short [S], medium [M], and long [L] wavelengths) in the retina and three retinal and subcortical cardinal mechanisms, which linearly combine cone signals into the luminance channel (L + M), the red-green channel (L - M), and the yellow-blue channel (S-(L + M)). Chromatic mechanisms at the cortical level, however, are less well characterized. The present study investigated such higher-order chromatic mechanisms by recording electroencephalograms (EEGs) on human observers in a noise masking paradigm. Observers viewed colored stimuli that consisted of a target embedded in noise. Color directions of the target and noise varied independently and systematically in an isoluminant plane of color space. The target was flickering on-off at 3 Hz, eliciting steady-state visual evoked potential (SSVEP) responses. As a result, the masking strength could be estimated from the SSVEP amplitude in the presence of 6 Hz noise. Masking was strongest (i.e. target eliciting smallest SSVEPs) when the target and noise were along the same color direction, and was weakest (i.e. target eliciting highest SSVEPs) when the target and noise were along orthogonal directions. This pattern of results was observed both when the target color varied along the cardinal and intermediate directions, which is evidence for higher-order chromatic mechanisms tuned to intermediate axes. The SSVEP result can be well predicted by a model with multiple broadly tuned chromatic mechanisms. In contrast, a model with only cardinal mechanisms failed to account for the data. These results provide strong electrophysiological evidence for multiple chromatic mechanisms in the early visual cortex of humans.
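For reference, the three cardinal post-receptoral combinations of cone signals named above are conventionally written as below; the higher-order mechanisms supported by the SSVEP data would correspond to additional channels tuned to directions intermediate between the two chromatic (RG and YB) axes.

```latex
% Cardinal channels built from L, M and S cone signals:
\mathrm{Lum} = L + M, \qquad
\mathrm{RG} = L - M, \qquad
\mathrm{YB} = S - (L + M)
```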
Affiliation(s)
- Jing Chen
- School of Psychology, Shanghai University of Sport, Shanghai, China
- https://orcid.org/0000-0002-3038-1786
- Karl R Gegenfurtner
- Abteilung Allgemeine Psychologie and Center for Mind, Brain & Behavior, Justus-Liebig-Universität Gießen, Gießen, Germany
- https://www.allpsych.uni-giessen.de/karl/
18. Teng T, Li S, Zhang H. The virtual loss function in the summary perception of motion and its limited adjustability. J Vis 2021;21:2. PMID: 33944907; PMCID: PMC8107510; DOI: 10.1167/jov.21.5.2.
Abstract
Humans can grasp the "average" feature of a visual ensemble quickly and effortlessly. However, the exact form of the summary statistic humans perceive is largely unknown, and it is even less clear whether this form can be changed by feedback. Here we borrow the concept of a loss function to characterize how the summary perception is related to the distribution of feature values in the ensemble, assuming that the summary statistic minimizes a virtual expected loss associated with its deviation from individual feature values. In two experiments, we used a random-dot motion estimation task to infer the virtual loss function implicit in ensemble perception and to see whether it can be changed by feedback. On each trial, participants reported the average moving direction of an ensemble of moving dots whose distribution of moving directions was skewed. In Experiment 1, where no feedback was available, participants' estimates fell between the mean and the mode of the distribution and were closer to the mean. In particular, the deviation from the mean and toward the mode increased almost linearly with the mode-to-mean distance. The pattern was best modeled by an inverse Gaussian loss function, which punishes large errors less heavily than the quadratic loss function does. In Experiment 2, we tested whether this virtual loss function can be altered by feedback. Two groups of participants received either the mode or the mean as the correct answer. After extensive training of up to five days, both groups' estimates moved slightly towards the mode, regardless of which answer had been reinforced. That is, feedback had no specific influence on participants' virtual loss function. To conclude, the virtual loss function in the summary perception of motion is close to inverse Gaussian, and it can hardly be changed by feedback.
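The "virtual loss function" idea can be made concrete as choosing the estimate that minimizes the mean loss over the feature values in the ensemble; the sketch below compares a quadratic loss, which recovers the arithmetic mean, with a saturating inverse-Gaussian-shaped loss of the kind favoured above. The loss width and the toy skewed distribution are assumptions for illustration.

```python
import numpy as np

def best_estimate(values, loss):
    """Grid search for the estimate minimizing the mean loss over all values."""
    candidates = np.linspace(values.min(), values.max(), 1001)
    expected = [np.mean(loss(c - values)) for c in candidates]
    return candidates[int(np.argmin(expected))]

def quadratic(err):
    return err ** 2

def inv_gauss(err, width=15.0):
    return 1.0 - np.exp(-0.5 * (err / width) ** 2)   # saturates for large errors

rng = np.random.default_rng(0)
directions = np.concatenate([rng.normal(0, 5, 700), rng.normal(40, 5, 300)])  # skewed set
print(np.mean(directions))                      # arithmetic mean, ~12 deg
print(best_estimate(directions, quadratic))     # quadratic loss recovers the mean
print(best_estimate(directions, inv_gauss))     # saturating loss shifts toward the mode (~0)
```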
Affiliation(s)
- Tianyuan Teng
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Sheng Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Hang Zhang
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
19.
Abstract
Evidence accumulation models like the diffusion model are increasingly used by researchers to identify the contributions of sensory and decisional factors to the speed and accuracy of decision-making. Drift rates, decision criteria, and nondecision times estimated from such models provide meaningful estimates of the quality of evidence in the stimulus, the bias and caution in the decision process, and the duration of nondecision processes. Recently, Dutilh et al. (Psychonomic Bulletin & Review 26, 1051–1069, 2019) carried out a large-scale, blinded validation study of decision models using the random dot motion (RDM) task. They found that the parameters of the diffusion model were generally well recovered, but there was a pervasive failure of selective influence, such that manipulations of evidence quality, decision bias, and caution also affected estimated nondecision times. This failure casts doubt on the psychometric validity of such estimates. Here we argue that the RDM task has unusual perceptual characteristics that may be better described by a model in which drift and diffusion rates increase over time rather than turn on abruptly. We reanalyze the Dutilh et al. data using models with abrupt and continuous-onset drift and diffusion rates and find that the continuous-onset model provides a better overall fit and more meaningful parameter estimates, which accord with the known psychophysical properties of the RDM task. We argue that further selective influence studies that fail to take into account the visual properties of the evidence entering the decision process are likely to be unproductive.
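To illustrate the distinction drawn above, the simulation below contrasts a diffusion process whose drift and diffusion switch on abruptly with one whose rates grow gradually toward their asymptotes; the exponential ramp, its time constant, and the other parameter values are assumptions for illustration, not the model fitted in the reanalysis.

```python
import numpy as np

def mean_decision_time(drift=0.15, sigma=1.0, bound=1.0, dt=0.001,
                       tau=None, n_trials=500, seed=0):
    """Mean first-passage time of a two-boundary diffusion process. If tau is
    given, drift and diffusion ramp up as 1 - exp(-t / tau) instead of turning
    on abruptly at stimulus onset."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            ramp = 1.0 if tau is None else 1.0 - np.exp(-t / tau)
            x += ramp * drift * dt + np.sqrt(ramp) * sigma * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    return np.mean(times)

print(mean_decision_time(tau=None))   # abrupt onset
print(mean_decision_time(tau=0.3))    # gradual onset: slower early accumulation, longer times
```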
20.
Abstract
There is a growing body of research on ensemble perception, or our ability to form ensemble representations based on perceptual features for stimuli of varying levels of complexity, and more recently, on ensemble cognition, which refers to our ability to perceive higher-level properties of stimuli such as facial attractiveness or gaze direction. Less is known about our ability to form ensemble representations based on more abstract properties such as the semantic meaning associated with items in a scene. Previous work examining whether the meaning associated with digits can be incorporated into summary statistical representations suggests that numerical information from digit ensembles can be extracted rapidly, and likely using a parallel processing mechanism. Here, we further investigate whether participants can accurately generate summary representations of numerical value from digit sets and explore the effect of set size on their ability to do so, by comparing psychometric functions based on a numerical averaging task in which set size varied. Steeper slopes for ten- and seven-item compared to five-item digit sets provide evidence that displays with more digits yield more reliable discrimination between larger and smaller numerical averages. Additionally, consistent with previous reports, we observed a response bias such that participants were more likely to report that the numerical average was "greater than 5" for larger compared to smaller sets. Overall, our results contribute to evidence that ensemble representations for semantic attributes may be carried out via similar mechanisms as those reported for perceptual features.
21. Arslanova I, Wang K, Gomi H, Haggard P. Somatosensory evoked potentials that index lateral inhibition are modulated according to the mode of perceptual processing: comparing or combining multi-digit tactile motion. Cogn Neurosci 2020;13:47-59. PMID: 33307992; DOI: 10.1080/17588928.2020.1839403.
Abstract
Many perceptual studies focus on the brain's capacity to discriminate between stimuli. However, our normal experience of the world also involves integrating multiple stimuli into a single perceptual event. Neural mechanisms such as lateral inhibition are believed to enhance local differences between sensory inputs from nearby regions of the receptor surface. However, this mechanism would seem dysfunctional when sensory inputs need to be combined rather than contrasted. Here, we investigated whether the brain can strategically regulate the strength of suppressive interactions that underlie lateral inhibition between finger representations in human somatosensory processing. To do this, we compared sensory processing between conditions that required either comparing or combining information. We delivered two simultaneous tactile motion trajectories to index and middle fingertips of the right hand. Participants had to either compare the directions of the two stimuli, or to combine them to form their average direction. To reveal preparatory tuning of somatosensory cortex, we used an established event-related potential design to measure the interaction between cortical representations evoked by digital nerve shocks immediately before each tactile stimulus. Consistent with previous studies, we found a clear suppression between cortical activations when participants were instructed to compare the tactile motion directions. Importantly, this suppression was significantly reduced when participants had to combine the same stimuli. These findings suggest that the brain can strategically switch between a comparative and a combinative mode of somatosensory processing, according to the perceptual goal, by preparatorily adjusting the strength of a process akin to lateral inhibition.
Affiliation(s)
- Irena Arslanova
- Institute of Cognitive Neuroscience, University College London, London, UK
- Keying Wang
- Institute of Cognitive Neuroscience, University College London, London, UK
- Hiroaki Gomi
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London, London, UK
22.
Abstract
In a glance, observers can evaluate gist characteristics from crowds of faces, such as the average emotional tenor or the average family resemblance. Prior research suggests that high-level ensemble percepts rely on holistic and viewpoint-invariant information. However, it is also possible that feature-based analysis was sufficient to yield successful ensemble percepts in many situations. To confirm that ensemble percepts can be extracted holistically, we asked observers to report the average emotional valence of Mooney face crowds. Mooney faces are two-tone, shadow-defined images that cannot be recognized in a part-based manner. To recognize features in a Mooney face, one must first recognize the image as a face by processing it holistically. Across experiments, we demonstrated that observers successfully extracted the average emotional valence from crowds that were spatially distributed or viewed in a rapid temporal sequence. In a subsequent set of experiments, we maximized holistic processing by including only those Mooney faces that were difficult to recognize when inverted. Under these conditions, participants remained highly sensitive to the average emotional valence of Mooney face crowds. Taken together, these experiments provide evidence that ensemble perception can operate selectively on holistic representations of human faces, even when feature-based information is not readily available.
23.
Abstract
Spatial averaging of luminances over a variegated region has been assumed in visual processes such as light adaptation, texture segmentation, and lightness scaling. Despite the importance of these processes, how mean brightness can be computed remains largely unknown. We investigated how accurately and precisely mean brightness can be compared for two briefly presented heterogeneous luminance arrays composed of different numbers of disks. The results demonstrated that mean brightness judgments can be made in a task-dependent and flexible fashion. Mean brightness judgments measured via the point of subjective equality (PSE) exhibited a consistent bias, suggesting that observers relied strongly on a subset of the disks (e.g., the highest- or lowest-luminance disks) in making their judgments. Moreover, the direction of the bias flexibly changed with the task requirements, even when the stimuli were completely the same. When asked to choose the brighter array, observers relied more on the highest-luminance disks. However, when asked to choose the darker array, observers relied more on the lowest-luminance disks. In contrast, when the task was the same, observers' judgments were almost immune to substantial changes in apparent contrast caused by changing the background luminance. Despite the bias in PSE, the mean brightness judgments were precise. The just-noticeable differences measured for multiple disks were similar to or even smaller than those for single disks, which suggested a benefit of averaging. These findings implicated flexible weighted averaging; that is, mean brightness can be judged efficiently by flexibly relying more on a few items that are relevant to the task.
24. Lei Y, He X, Zhao T, Tian Z. Contrast Effect of Facial Attractiveness in Groups. Front Psychol 2020;11:2258. PMID: 33041899; PMCID: PMC7523431; DOI: 10.3389/fpsyg.2020.02258.
Abstract
Research on facial attractiveness is an important part of aesthetics. Most relevant studies in the area have focused on the influence of individual perspectives on facial attractiveness, but it is necessary to consider the effect of contextual information on facial attractiveness. In this study, we examine the influence on attractiveness of special faces in a given group. We define a “special face” as one that is significantly different from other members of the same group in terms of facial attractiveness. We conducted three experiments to explore the influence of different modes of presentation and of central positions in a group on the judgment of attractiveness of the special face. The results show the following: (1) When the special face was part of a given group, the subjects made more extreme judgments than without it: that is, they judged the most attractive face as more attractive and the least attractive face as less attractive than when faces were presented alone. (2) The subjects rated the most attractive faces lower and the least attractive faces higher when the target faces were in the middle of the group than when they were in other positions. The results favored the contrast effect: when the subjects judged the attractiveness of the target stimulus, they always compared it with the environment, which then became a reference in this regard. Moreover, the greater the amount of contextual information perceived, the higher the likelihood that assimilation would occur.
Collapse
Affiliation(s)
- Yatian Lei
- School of Psychology, South China Normal University, Guangzhou, China
| | - Xianyou He
- School of Psychology, South China Normal University, Guangzhou, China; Center for Studies of Psychological Application, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China; Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
| | - Tingting Zhao
- School of Health Management, Guangzhou Medical University, Guangzhou, China
| | - Zuye Tian
- School of Psychology, South China Normal University, Guangzhou, China
| |
Collapse
|
25
|
Abstract
There has been a recent surge of research examining how the visual system compresses information by representing the average properties of sets of similar objects to circumvent strict capacity limitations. Efficient representation by perceptual averaging helps to maintain the balance between the needs to perceive salient events in the surrounding environment and sustain the illusion of stable and complete perception. Whereas there have been many demonstrations that the visual system encodes spatial average properties, such as average orientation, average size, and average numerosity along single dimensions, there has been no investigation of whether the fundamental nature of average representations extends to the temporal domain. Here, we used an adaptation paradigm to demonstrate that the average duration of a set of sequentially presented stimuli negatively biases the perceived duration of subsequently presented information. This negative adaptation aftereffect is indicative of a fundamental visual property, providing the first evidence that average duration is encoded along a single visual dimension. Our results not only have important implications for how the visual system efficiently encodes redundant information to evaluate salient events as they unfold within the dynamic context of the surrounding environment, but also contribute to the long-standing debate regarding the neural underpinnings of temporal encoding.
Collapse
|
26
|
Perception and decision mechanisms involved in average estimation of spatiotemporal ensembles. Sci Rep 2020; 10:1318. [PMID: 31992785 PMCID: PMC6987113 DOI: 10.1038/s41598-020-58112-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 01/10/2020] [Indexed: 11/08/2022] Open
Abstract
A number of studies on texture and ensemble perception have shown that humans can immediately estimate the average of spatially distributed visual information. The present study characterized the mechanisms involved in estimating averages for information distributed over both space and time. Observers viewed a rapid sequence of texture patterns in which the elements' orientations were determined by dynamic Gaussian noise with variable spatial and temporal standard deviations (SDs). We found that discrimination thresholds increased beyond a certain spatial SD if the temporal SD was small, but if the temporal SD was large, thresholds remained nearly constant regardless of spatial SD. These data are at odds with predictions that the threshold is uniquely determined by the spatiotemporal SD. Moreover, a reverse correlation analysis revealed that observers judged the spatiotemporal average orientation largely on the basis of the spatial average orientation over the last few frames of the texture sequence - a recency effect widely observed in studies of perceptual decision making. The results are consistent with the notion that the visual system rapidly computes spatial ensembles and adaptively accumulates information over time to make a decision on the spatiotemporal average. A simple computational model based on this notion successfully replicated the observed data.
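As a rough illustration of the proposed mechanism, the sketch below computes a spatial ensemble average for each frame and then combines frames with exponentially increasing weights toward the end of the sequence, mimicking the recency effect. The exponential weighting rule and all parameter values are assumptions for illustration, not the authors' model.

```python
# Illustrative sketch: per-frame spatial averages combined with recency weighting.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_elements = 10, 64
spatial_sd, temporal_sd = 8.0, 4.0                             # orientation SDs in degrees

frame_means = rng.normal(0.0, temporal_sd, n_frames)           # mean orientation per frame
stimulus = frame_means[:, None] + rng.normal(0.0, spatial_sd, (n_frames, n_elements))

spatial_avg = stimulus.mean(axis=1)                            # fast spatial ensemble per frame
tau = 3.0                                                      # frames; recency time constant
w = np.exp((np.arange(n_frames) - (n_frames - 1)) / tau)       # heavier weights on late frames
decision_variable = np.sum(w * spatial_avg) / np.sum(w)
print(decision_variable, stimulus.mean())                      # recency-weighted vs. true average
```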
Collapse
|
27
|
Jeong J, Chong SC. Adaptation to mean and variance: Interrelationships between mean and variance representations in orientation perception. Vision Res 2020; 167:46-53. [PMID: 31954877 DOI: 10.1016/j.visres.2020.01.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Revised: 12/31/2019] [Accepted: 01/03/2020] [Indexed: 11/26/2022]
Abstract
When there are many visual items, the visual system could represent their summary statistics (e.g., mean, variance) to process them efficiently. Although many previous studies have investigated the mean or variance representation itself, a relationship between these two ensemble representations has not been investigated much. In this study, we tested the potential interaction between mean and variance representations by using a visual adaptation method. We reasoned that if mean and variance representations interact with each other, an adaptation aftereffect to either mean or variance would influence the perception of the other. Participants watched a sequence of orientation arrays containing a specific statistical property during the adaptation period. To produce an adaptation aftereffect specific to variance or mean, one property of the adaptor arrays (variance or mean) had a fixed value while the other property was randomly varied. After the adaptation, participants were asked to discriminate the property of the test array that was randomly varied during the adaptation. We found that the adaptation aftereffect of orientation variance influenced the sensitivity of mean orientation discrimination (Experiment 1), and that the adaptation aftereffect of mean orientation influenced the bias of orientation variance discrimination (Experiment 2). These results suggest that mean and variance representations do closely interact with each other. Considering that mean and variance reflect the representative value and dispersion of multiple items respectively, the interactions between mean and variance representations may reflect their complementary roles to summarize complex visual information effectively.
Collapse
Affiliation(s)
- Jinhyeok Jeong
- The Graduate Program in Cognitive Science, Yonsei University, Seoul, South Korea
| | - Sang Chul Chong
- The Graduate Program in Cognitive Science, Yonsei University, Seoul, South Korea; Department of Psychology, Yonsei University, Seoul, South Korea.
| |
Collapse
|
28
|
Elucidating the Neural Representation and the Processing Dynamics of Face Ensembles. J Neurosci 2019; 39:7737-7747. [PMID: 31413074 DOI: 10.1523/jneurosci.0471-19.2019] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 08/02/2019] [Accepted: 08/06/2019] [Indexed: 11/21/2022] Open
Abstract
Extensive behavioral work has documented the ability of the human visual system to extract summary representations from face ensembles (e.g., the average identity of a crowd of faces). Yet, the nature of such representations, their underlying neural mechanisms, and their temporal dynamics await elucidation. Here, we examine summary representations of facial identity in human adults (of both sexes) with the aid of pattern analyses, as applied to EEG data, along with behavioral testing. Our findings confirm the ability of the visual system to form such representations both explicitly and implicitly (i.e., with or without the use of specific instructions). We show that summary representations, rather than individual ensemble constituents, can be decoded from neural signals elicited by ensemble perception, we describe the properties of such representations by appeal to multidimensional face space constructs, and we visualize their content through neural-based image reconstruction. Further, we show that the temporal profile of ensemble processing diverges systematically from that of single faces, consistent with a slower, more gradual accumulation of perceptual information. Thus, our findings reveal the representational basis of ensemble processing, its fine-grained visual content, and its neural dynamics. SIGNIFICANCE STATEMENT: Humans encounter groups of faces, or ensembles, in a variety of environments. Previous behavioral research has investigated how humans process face ensembles as well as the types of summary representations that can be derived from them, such as average emotion, gender, and identity. However, the neural mechanisms mediating these processes are unclear. Here, we demonstrate that ensemble representations, with different facial identity summaries, can be decoded and even visualized from neural data through multivariate analyses. These results provide, to our knowledge, the first detailed investigation into the status and the visual content of neural ensemble representations of faces. Further, the current findings shed light on the temporal dynamics of face ensembles and their relationship with single-face processing.
Collapse
|
29
|
Hansmann-Roth S, Chetverikov A, Kristjánsson Á. Representing color and orientation ensembles: Can observers learn multiple feature distributions? J Vis 2019; 19:2. [DOI: 10.1167/19.9.2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
| | - Andrey Chetverikov
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Cognitive Research Lab, Russian Academy of National Economy and Public Administration, Moscow, Russia
| | - Árni Kristjánsson
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
| |
Collapse
|
30
|
Abstract
A computer joystick is an efficient and cost-effective response device for recording continuous movements in psychological experiments. Movement trajectories and other measures from continuous responses have expanded the insights gained from discrete responses (e.g., button presses) by providing unique information about how cognitive processes unfold over time. However, few studies have evaluated the validity of joystick responses with reference to conventional key presses, and how response modality can affect cognitive processes. Here we systematically compared human participants' behavioral performance of perceptual decision-making when they responded with either joystick movements or key presses in a four-alternative motion discrimination task. We found evidence that the response modality did not affect raw behavioral measures, including decision accuracy and mean response time, at the group level. Furthermore, to compare the underlying decision processes between the two response modalities, we fitted a drift-diffusion model of decision-making to individual participants' behavioral data. Bayesian analyses of the model parameters showed no evidence that switching from key presses to continuous joystick movements modulated the decision-making process. These results supported continuous joystick actions as a valid apparatus for continuous movements, although we highlight the need for caution when conducting experiments with continuous movement responses.
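For readers unfamiliar with the model, the following is a minimal simulation of a standard drift-diffusion process (drift rate, decision boundary, non-decision time). It is a generic sketch under assumed parameter values, not the authors' fitting procedure.

```python
# Minimal drift-diffusion simulation: evidence drifts toward one of two boundaries.
import numpy as np

def simulate_ddm(v=0.3, a=1.0, t0=0.3, dt=0.001, sigma=1.0, n_trials=2000, seed=1):
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < a / 2:                      # symmetric boundaries at +/- a/2
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t0)                         # add non-decision time
        choices.append(x > 0)                      # True = upper (correct) boundary
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm()
print(f"accuracy = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")
```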
Collapse
|
31
|
Birch EE, Kelly KR, Giaschi DE. Fellow Eye Deficits in Amblyopia. J Binocul Vis Ocul Motil 2019; 69:116-125. [PMID: 31161888 PMCID: PMC6673659 DOI: 10.1080/2576117x.2019.1624440] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 05/23/2019] [Accepted: 05/23/2019] [Indexed: 10/26/2022]
Abstract
Amblyopia is a neurodevelopmental disorder of the visual system that results from discordant visual experience during infancy or early childhood. Because amblyopia is typically defined as monocularly reduced visual acuity accompanied by one or more known amblyogenic factors, it is often assumed that the fellow eye is normal and sufficient for tasks like reading and eye-hand coordination. Recent scientific evidence of ocular motor, visual, and visuomotor deficits that are present with fellow eye monocular viewing and with binocular viewing calls this assumption into question. This clinical update reviews the research that has revealed fellow eye ocular motor and visual deficits and the effect that these deficits have on an amblyopic child's visuomotor and visuocognitive skills. We need to understand how to prevent and rehabilitate the effects of amblyopia not only on the nonpreferred eye but also on the fellow eye.
Collapse
Affiliation(s)
- Eileen E Birch
- Crystal Charity Ball Pediatric Vision Laboratory, Retina Foundation of the Southwest, Dallas, TX, USA
- Department of Ophthalmology, UT Southwestern Medical Center, Dallas, TX, USA
| | - Krista R Kelly
- Crystal Charity Ball Pediatric Vision Laboratory, Retina Foundation of the Southwest, Dallas, TX, USA
| | - Deborah E Giaschi
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
32
|
Ueda S. Effects of the Simultaneous Presentation of Corresponding Auditory and Visual Stimuli on Size Variance Perception. Iperception 2018; 9:2041669518815709. [PMID: 30559958 PMCID: PMC6291879 DOI: 10.1177/2041669518815709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2018] [Accepted: 11/04/2018] [Indexed: 11/15/2022] Open
Abstract
To overcome limitations in perceptual bandwidth, humans condense various features of the environment into summary statistics. Variance is an index of the diversity within a category and of the reliability of the information about that diversity. Studies have shown that humans can efficiently perceive variance for visual stimuli; however, to enhance perception of the environment, information about the external world can be obtained from multiple sensory modalities and integrated. Consequently, this study investigates, through two experiments, whether the precision of variance perception improves when visual information (size) and corresponding auditory information (pitch) are integrated. In Experiment 1, we measured the correspondence between visual size and auditory pitch for each participant by using adjustment measurements. The results showed a linear relationship between size and pitch; that is, the higher the pitch, the smaller the corresponding circle. In Experiment 2, sequences of visual stimuli were presented both with and without linked auditory tones, and the precision of perceived variance in size was measured. We found that synchronized presentation of auditory and visual stimuli that have the same variance improves the precision of perceived variance in size compared with visual-only presentation. This suggests that audiovisual information may be automatically integrated in variance perception.
Collapse
Affiliation(s)
- Sachiyo Ueda
- Department of Computer Science and Engineering, Toyohashi University of Technology, Japan
| |
Collapse
|
33
|
Waskom ML, Asfour J, Kiani R. Perceptual insensitivity to higher-order statistical moments of coherent random dot motion. J Vis 2018; 18:9. [PMID: 30029220 PMCID: PMC6894413 DOI: 10.1167/18.6.9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
When the visual system analyzes distributed patterns of sensory inputs, what features of those distributions does it use? It has been previously demonstrated that higher-order statistical moments of luminance distributions influence perception of static surfaces and textures. Here, we tested whether the brain also represents higher-order moments of dynamic stimuli. We constructed random dot kinematograms, where dots moved according to probability distributions that selectively differed in terms of their mean, variance, skewness, or kurtosis. When viewing these stimuli, human observers were sensitive to the mean direction of coherent motion and to the variance of dot displacement angles, but they were insensitive to skewness and kurtosis. Observer behavior accorded with a model of directional motion energy, suggesting that information about higher-order moments is discarded early in the visual processing hierarchy. These results demonstrate that use of higher-order moments is not a general property of visual perception.
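The four distributional properties manipulated in these stimuli can be computed directly from a sample of dot displacement angles, as in the sketch below; the Gaussian sample is an illustrative stand-in for an actual kinematogram.

```python
# Sketch: the four moments that distinguished the motion stimuli described above,
# computed from a sample of dot displacement angles (illustrative values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
angles = rng.normal(loc=0.0, scale=20.0, size=5000)   # degrees around the mean direction

print("mean    ", np.mean(angles))
print("variance", np.var(angles))
print("skewness", stats.skew(angles))      # ~0 for a symmetric distribution
print("kurtosis", stats.kurtosis(angles))  # excess kurtosis, ~0 for a Gaussian
```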
Collapse
Affiliation(s)
- Michael L Waskom
- Center for Neural Science, New York University, New York, NY, USA
| | - Janeen Asfour
- Center for Neural Science, New York University, New York, NY, USA; College of Dentistry, New York University, New York, NY, USA
| | - Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY, USA; Neuroscience Institute, NYU Langone Medical Center, New York, NY, USA
| |
Collapse
|
34
|
Brand J, Johnson AP. The effects of distributed and focused attention on rapid scene categorization. VISUAL COGNITION 2018. [DOI: 10.1080/13506285.2018.1485808] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- John Brand
- Department of Epidemiology, Geisel School of Medicine, Dartmouth College, Hanover, NH, USA
| | - Aaron P. Johnson
- Department of Psychology, Concordia University, Montreal, Canada
| |
Collapse
|
35
|
Jones PR, Dekker TM. The development of perceptual averaging: learning what to do, not just how to do it. Dev Sci 2018; 21:e12584. [PMID: 28812307 PMCID: PMC5947545 DOI: 10.1111/desc.12584] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2016] [Accepted: 04/24/2017] [Indexed: 11/30/2022]
Abstract
The mature visual system condenses complex scenes into simple summary statistics (e.g., average size, location, or orientation). However, children often perform poorly on perceptual averaging tasks. Children's difficulties are typically thought to reflect the suboptimal implementation of an adult-like strategy. This paper examines another possibility: that children actually make decisions in a qualitatively different way from adults (optimal implementation of a non-ideal strategy). Ninety children (6-7, 8-9, 10-11 years) and 30 adults were asked to locate the middle of randomly generated dot-clouds. Nine plausible decision strategies were formulated, and each was fitted to observers' trial-by-trial response data (reverse correlation). When the number of visual elements was low (N < 6), children used a qualitatively different decision strategy from adults: appearing to "join up the dots" and locate the gravitational center of the enclosing shape. Given denser displays, both children and adults used an ideal strategy of arithmetically averaging individual points. Accounting for this difference in decision strategy explained 29% of children's lower precision. These findings suggest that children are not simply suboptimal at performing adult-like computations, but may at times use sensible, yet qualitatively different, strategies to make perceptual judgments. Learning which strategy is best in which circumstance might be an important driving factor of perceptual development.
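The two candidate strategies, arithmetic averaging of the dots versus locating the center of the enclosing shape, can be contrasted with a short sketch. The convex hull is used here as a stand-in for the "enclosing shape", which is an assumption for illustration rather than the authors' exact model.

```python
# Illustrative comparison of the two strategies: arithmetic mean of the dots
# versus the area centroid of the enclosing (convex-hull) shape.
import numpy as np
from scipy.spatial import ConvexHull

def hull_centroid(points):
    """Area centroid of the convex hull of 2D points (shoelace formula)."""
    hull = ConvexHull(points)
    v = points[hull.vertices]                       # hull vertices, counterclockwise
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])

rng = np.random.default_rng(3)
dots = rng.uniform(-1, 1, size=(5, 2))              # a sparse dot cloud (N < 6)
print("arithmetic mean:", dots.mean(axis=0))
print("hull centroid:  ", hull_centroid(dots))      # generally differs for sparse clouds
```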
Collapse
Affiliation(s)
- Pete R. Jones
- Institute of Ophthalmology, University College London (UCL), UK
- NIHR Moorfields Biomedical Research Centre, London, UK
| | - Tessa M. Dekker
- Institute of Ophthalmology, University College London (UCL), UK
- Psychology and Language Sciences, University College London (UCL), UK
| |
Collapse
|
36
|
Kimura E. Averaging colors of multicolor mosaics. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2018; 35:B43-B54. [PMID: 29603986 DOI: 10.1364/josaa.35.000b43] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/23/2017] [Accepted: 01/09/2018] [Indexed: 06/08/2023]
Abstract
The present study investigated how color information was summarized in multicolor mosaics. The mosaics were composed of small elements of 17 colors that roughly belonged to a single color category. We manipulated the degree of color variation around the mean by varying the proportion of different color elements. Observers matched the mean color of the multicolor mosaic by adjusting the color of a spatially uniform matching stimulus. Results showed that when the color variation was large, the matched color deviated from the colorimetric mean toward the most-saturated color, although the hue of the matched color was almost the same as that of the colorimetric mean. These findings together suggested differential processing of hue and saturation. The deviation of the matched color decreased, but did not disappear, when the color variation was reduced. The analysis of color metric underlying color averaging revealed differential color scaling in nearly orthogonal blue-orange and green-purple directions, implying that the visual system does not solely rely on linear cone-opponent codes when summarizing color signals. The deviation itself was consistently found regardless of different color metrics tested. The robustness of the deviation indicated an inherent bias of mean color judgments favoring highly saturated colors.
Collapse
|
37
|
Rocchi F, Ledgeway T, Webb BS. Criterion-free measurement of motion transparency perception at different speeds. J Vis 2018; 18:5. [PMID: 29614154 PMCID: PMC5886031 DOI: 10.1167/18.4.5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Transparency perception often occurs when objects within the visual scene partially occlude each other or move simultaneously at different velocities across the same spatial region. Although transparent motion perception has been studied extensively, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distributions of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of varying width removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception.
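The comparison stimulus described above can be sketched as sampling from a uniform speed range with a central notch removed. The range, notch width, and rejection-sampling scheme below are illustrative assumptions rather than the authors' stimulus code.

```python
# Sketch: speeds drawn from a uniform range with a central "notch" removed.
import numpy as np

def sample_notched_uniform(n, lo=1.0, hi=10.0, notch=(4.5, 6.5), seed=4):
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        s = rng.uniform(lo, hi)
        if not (notch[0] <= s <= notch[1]):      # reject speeds inside the notch
            samples.append(s)
    return np.array(samples)

standard = np.random.default_rng(5).uniform(1.0, 10.0, 200)   # full uniform range
comparison = sample_notched_uniform(200)                       # same range, notch removed
print(standard.mean(), comparison.mean())
```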
Collapse
Affiliation(s)
- Francesca Rocchi
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
| | - Timothy Ledgeway
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
| | - Ben S Webb
- Visual Neuroscience Group, School of Psychology, University of Nottingham, Nottingham, UK
| |
Collapse
|
38
|
Piras A, Raffi M, Perazzolo M, Squatrito S. Influence of heading perception in the control of posture. J Electromyogr Kinesiol 2018; 39:89-94. [PMID: 29454231 DOI: 10.1016/j.jelekin.2018.02.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2017] [Revised: 12/24/2017] [Accepted: 02/09/2018] [Indexed: 10/18/2022] Open
Abstract
Optic flow input directly influences postural control. The aim of the present study was to examine the relationship between visually induced heading perception and postural stability, using optic flow stimulation. The dots were accelerated to simulate a heading direction to the left or to the right of the vertical midline. The participants were instructed to indicate the perceived optic flow direction by making a saccade toward the simulated heading direction. We simultaneously acquired electromyographic and center of pressure (COP) signals. We analysed postural sway during three epochs: (i) the first 500 ms after stimulus onset, (ii) the 500 ms before saccade onset, the epoch in which the perception is achieved, and (iii) the 500 ms after saccade onset. Participants exhibited greater postural instability before the saccade, when the perception of heading was achieved, and the sway increased further after the saccade. These results indicate that the conscious representation of self-motion affects the neural control of posture more than the mere visual motion does, producing more instability when visual signals conflict with eye movements. Part of these effects may be due to interactions between gaze shifts and optic flow.
Collapse
Affiliation(s)
- Alessandro Piras
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy.
| | - Milena Raffi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Monica Perazzolo
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Salvatore Squatrito
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| |
Collapse
|
39
|
Neumann MF, Ng R, Rhodes G, Palermo R. Ensemble coding of face identity is not independent of the coding of individual identity. Q J Exp Psychol (Hove) 2018; 71:1357-1366. [DOI: 10.1080/17470218.2017.1318409] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation—of access to ensemble information in the absence of detailed exemplar information—that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Collapse
Affiliation(s)
- Markus F Neumann
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
| | - Ryan Ng
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
| | - Gillian Rhodes
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
| | - Romina Palermo
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
| |
Collapse
|
40
|
Abstract
To understand visual consciousness, we must understand how the brain represents ensembles of objects at many levels of perceptual analysis. Ensemble perception refers to the visual system's ability to extract summary statistical information from groups of similar objects, often in a brief glance. It defines foundational limits on cognition, memory, and behavior. In this review, we provide an operational definition of ensemble perception and demonstrate that ensemble perception spans across multiple levels of visual analysis, incorporating both low-level visual features and high-level social information. Further, we investigate the functional usefulness of ensemble perception and its efficiency, and we consider possible physiological and cognitive mechanisms that underlie an individual's ability to make accurate and rapid assessments of crowds of objects.
Collapse
Affiliation(s)
- David Whitney
- Department of Psychology, University of California, Berkeley, California 94720; Vision Science Program, University of California, Berkeley, California 94720; Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
| | | |
Collapse
|
41
|
Abstract
Crowds of emotional faces are ubiquitous, so much so that the visual system utilizes a specialized mechanism known as ensemble coding to see them. In addition to being proximally close, members of emotional crowds, such as a laughing audience or an angry mob, often behave together. The manner in which crowd members behave, in sync or out of sync, may be critical for understanding their collective affect. Are ensemble mechanisms sensitive to these dynamic properties of groups? Here, observers estimated the average emotion of a crowd of dynamic faces. The members of some crowds changed their expressions synchronously, whereas individuals in other crowds acted asynchronously. Observers perceived the emotion of a synchronous group more precisely than the emotion of an asynchronous crowd or even a single dynamic face. These results demonstrate that ensemble representation is particularly sensitive to coordinated behavior, and they suggest that shared behavior is critical for understanding emotion in groups.
Collapse
Affiliation(s)
- Elric Elias
- Department of Psychology, University of Denver
| | | | | |
Collapse
|
42
|
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys. J Neurosci 2016; 37:1394-1412. [PMID: 28003348 DOI: 10.1523/jneurosci.2682-16.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2016] [Revised: 12/03/2016] [Accepted: 12/10/2016] [Indexed: 11/21/2022] Open
Abstract
Despite the enduring interest in motion integration, a direct measure of the space-time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus-response correlations across space and time, computing the linear space-time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT: A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space-time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing.
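The core idea of recovering a linear filter from stimulus-response correlations can be sketched with a purely temporal toy example: white-noise motion perturbations are passed through a known filter to produce simulated "eye velocity", and least-squares regression on lagged copies of the stimulus recovers that filter. Everything here (filter shape, noise level, lag count) is an illustrative assumption, not the authors' analysis pipeline.

```python
# Minimal sketch of least-squares reverse correlation for a temporal filter.
import numpy as np

rng = np.random.default_rng(6)
n, n_lags = 5000, 40
stim = rng.standard_normal(n)                          # white-noise motion perturbations
true_filter = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)
eye = np.convolve(stim, true_filter)[:n] + 0.5 * rng.standard_normal(n)

# Design matrix of lagged stimulus values; each column is one lag.
X = np.column_stack([np.roll(stim, lag) for lag in range(n_lags)])
X[:n_lags] = 0.0                                       # discard wrap-around rows
estimated_filter, *_ = np.linalg.lstsq(X, eye, rcond=None)
print(np.corrcoef(true_filter, estimated_filter)[0, 1])  # should be close to 1
```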
Collapse
|
43
|
|
44
|
Sakuma N, Kimura E, Goryo K. Rapid proportion comparison with spatial arrays of frequently used meaningful visual symbols. Q J Exp Psychol (Hove) 2016; 70:2371-2385. [PMID: 27775482 DOI: 10.1080/17470218.2016.1239747] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
It has been shown that when two arrays of Arabic numerals were briefly presented, observers could accurately indicate which array contained the larger number of a target numeral. This study investigated whether this rapid proportion comparison can be extended to other meaningful symbols that share some of the notable properties of Arabic numerals. We mainly tested several Japanese Kanji letters, each of which represents a meaning and can work as a word. Using physically identical stimulus sets that could be interpreted as different types of letters, Experiment 1 first confirmed the rapid proportion comparison with Arabic numerals for Japanese participants. Experiment 2 showed that the rapid proportion comparison can be extended to Kanji numerals. Experiment 3 demonstrated that rapid proportion judgments can also be found with non-quantitative Kanji letters that are used frequently. Experiment 4 further demonstrated the rapid proportion comparison with frequently used meaningful non-letter symbols (gender icons). The rapid processing cannot be attributed to fluent processing of familiar items, because it was not found with familiar phonograms (Japanese Kana letters). These findings suggest that the rapid proportion comparison can be commonly found with frequently used meaningful symbols, even though their meaning is not relevant to the task.
Collapse
Affiliation(s)
- Naoto Sakuma
- a Graduate School of Humanities and Social Sciences , Chiba University , Chiba , Japan
| | - Eiji Kimura
- b Department of Psychology, Faculty of Letters , Chiba University , Chiba , Japan
| | - Ken Goryo
- b Department of Psychology, Faculty of Letters , Chiba University , Chiba , Japan
| |
Collapse
|
45
|
Li H, Ji L, Tong K, Ren N, Chen W, Liu CH, Fu X. Processing of Individual Items during Ensemble Coding of Facial Expressions. Front Psychol 2016; 7:1332. [PMID: 27656154 PMCID: PMC5013048 DOI: 10.3389/fpsyg.2016.01332] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Accepted: 08/19/2016] [Indexed: 11/17/2022] Open
Abstract
There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation of individual faces in a face set. However, most of these results were based on judgments made under limited processing resources. We examined a wider range of exposure times and observed how the relationship between extraction of the mean and representation of individual facial expressions changed. The results showed that with an exposure time of 50 ms for the faces, observers were more sensitive to the mean representation than to individual representations, replicating typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is also more susceptible to noise accumulated during additional processing time and leads to a more conservative decision bias, whereas individual representations seem more resistant to this noise. These results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.
Collapse
Affiliation(s)
- Huiyun Li
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Luyan Ji
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
| | - Ke Tong
- Department of Psychology, University of South Florida, Tampa, FL, USA
| | - Naixin Ren
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Wenfeng Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| | - Chang Hong Liu
- Department of Psychology, Bournemouth University, Poole, UK
| | - Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
46
|
Attarha M, Moore CM, Vecera SP. The time-limited visual statistician. J Exp Psychol Hum Percept Perform 2016; 42:1497-504. [PMID: 27336630 DOI: 10.1037/xhp0000255] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The visual system can calculate summary statistics over time. For example, the multiple frames of a movie showing a dynamically changing disk can be collapsed to form a single representation of that disk's mean size. Summary representations of dynamic information may engage online updating processes that establish a running average of the mean by continuously adjusting the persisting representation of the average in tandem with the arrival of incoming information. Alternatively, summary representations may involve subsampling strategies that reflect limitations in the degree to which the visual system can integrate information over time. Observers watched movies of a disk that changed size smoothly at different rates and then reported the disk's average size by adjusting the diameter of a response disk. Critically, the movie varied in duration. Size estimates depended on the duration of the movie. They were constant and fairly accurate for movie durations up to approximately 600 ms, at which point accuracy decreased with increasing duration to imprecise levels by about 1,000 ms. Summary statistics established over time are unlikely to be updated continuously and may instead be restricted by subsampling processes, such as limited temporal windows of integration.
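The contrast between continuous updating and a limited temporal window can be made concrete with a toy calculation: for a disk that grows steadily, a full running average tracks the true mean, whereas an estimate restricted to the most recent window of frames is biased toward the late sizes. Window length and growth rate below are illustrative assumptions.

```python
# Toy comparison: continuously updated running average vs. a limited temporal window.
import numpy as np

dt_ms = 10
duration_ms = 1000
t = np.arange(0, duration_ms, dt_ms)
disk_size = 2.0 + 0.004 * t                     # disk diameter growing smoothly over time

running_average = np.cumsum(disk_size) / np.arange(1, len(t) + 1)   # uses all frames so far
window_ms = 600                                  # limited integration window
window = int(window_ms / dt_ms)
limited_average = disk_size[-window:].mean()     # only the most recent ~600 ms

print("true mean:        ", disk_size.mean())
print("running average:  ", running_average[-1])   # equals the true mean at the end
print("windowed estimate:", limited_average)       # overestimates for a growing disk
```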
Collapse
Affiliation(s)
- Mouna Attarha
- Department of Psychological and Brain Sciences, University of Iowa
| | - Cathleen M Moore
- Department of Psychological and Brain Sciences, University of Iowa
| | - Shaun P Vecera
- Department of Psychological and Brain Sciences, University of Iowa
| |
Collapse
|
47
|
Abstract
Observers judged the motion coherence of random-dot cinematograms. Theoretical models were developed for coherence matches between cinematograms constructed from different angle distributions. Evidence is presented that coherence matches are made on the basis of the Shannon-Wiener information entropy. We show how the formal structure of information theory may be used to predict perceived pattern goodness when the underlying distributions of pattern alternatives are implicit in the judgment task.
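The entropy measure referred to above can be computed directly from the probability distribution of dot directions; the two example direction distributions below are illustrative.

```python
# Shannon entropy of dot-direction distributions in two example cinematograms.
import numpy as np
from scipy.stats import entropy

# Probability of each of 8 possible dot directions.
coherent = np.array([0.65, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])  # mostly one direction
incoherent = np.full(8, 1 / 8)                                          # uniform directions

print(entropy(coherent, base=2))    # low entropy -> high perceived coherence
print(entropy(incoherent, base=2))  # maximum entropy (3 bits) -> low coherence
```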
Collapse
|
48
|
Price NSC, VanCuylenberg JB. Noisy decision thresholds can account for suboptimal detection of low coherence motion. Sci Rep 2016; 6:18700. [PMID: 26726736 PMCID: PMC4698657 DOI: 10.1038/srep18700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2015] [Accepted: 11/23/2015] [Indexed: 11/09/2022] Open
Abstract
Noise in sensory signals can vary over both space and time. Moving random dot stimuli are commonly used to quantify how the visual system accounts for spatial noise. In these stimuli, a fixed proportion of "signal" dots move in the same direction and the remaining "noise" dots are randomly replotted. The spatial coherence, or proportion of signal versus noise dots, is fixed across time; however, this means that little is known about how temporally-noisy signals are integrated. Here we use a stimulus with low temporal coherence; the signal direction is only presented on a fraction of frames. Human observers are able to reliably detect and discriminate the direction of a 200 ms motion pulse, even when just 25% of frames within the pulse move in the signal direction. Using psychophysical reverse-correlation analyses, we show that observers are strongly influenced by the number of near-target directions spread throughout the pulse, and that consecutive signal frames have only a small additional influence on perception. Finally, we develop a model inspired by the leaky integration of the responses of direction-selective neurons, which reliably represents motion direction, and which can account for observers' sub-optimal detection of motion pulses by incorporating a noisy decision threshold.
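A minimal sketch of this model class, leaky accumulation of noisy frame-by-frame evidence compared against a threshold that varies from trial to trial, is given below. All parameter values and the specific update rule are illustrative assumptions, not the authors' fitted model.

```python
# Sketch: leaky integration of noisy sensory drive with a noisy decision threshold.
import numpy as np

def detect_pulse(signal_frames, leak=0.1, gain=1.0, noise_sd=0.5,
                 threshold_mean=3.0, threshold_sd=1.0, seed=7):
    """Return True if the leaky integrator crosses a (noisy) threshold."""
    rng = np.random.default_rng(seed)
    threshold = rng.normal(threshold_mean, threshold_sd)   # threshold varies trial to trial
    x = 0.0
    for s in signal_frames:                                # s = 1 on signal frames, else 0
        drive = gain * s + rng.normal(0.0, noise_sd)       # noisy sensory evidence
        x += drive - leak * x                              # leaky accumulation
        if x >= threshold:
            return True
    return False

pulse = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0])     # brief pulse, partial signal frames
print(detect_pulse(pulse))
```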
Collapse
|
49
|
Abstract
The simultaneous-sequential method was used to test the processing capacity of establishing mean orientation summaries. Four clusters of oriented Gabor patches were presented in the peripheral visual field. One of the clusters had a mean orientation that was tilted either left or right, whereas the mean orientations of the other three clusters were roughly vertical. All four clusters were presented at the same time in the simultaneous condition, whereas the clusters appeared in temporal subsets of two in the sequential condition. Performance was lower when the means of all four clusters had to be processed concurrently than when only two had to be processed in the same amount of time. The advantage for establishing fewer summaries at a given time indicates that the processing of mean orientation engages limited-capacity processes (Exp. 1). This limitation cannot be attributed to crowding, low target-distractor discriminability, or a limited-capacity comparison process (Exps. 2 and 3). In contrast to the limitations of establishing multiple summary representations, establishing a single summary representation unfolds without interference (Exp. 4). When interpreted in the context of recent work on the capacity of summary statistics, these findings encourage a reevaluation of the view that early visual perception consists of creating summary statistic representations that unfold independently across multiple areas of the visual field.
Collapse
|
50
|
Abstract
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic.
Collapse
|