1. Subjectively salient faces differ from emotional faces: ERP evidence. Sci Rep 2024; 14:3634. PMID: 38351111; PMCID: PMC10864357; DOI: 10.1038/s41598-024-54215-5.
Abstract
The self-face is processed differently from emotional faces. A question arises whether other highly familiar and subjectively significant non-self faces (e.g. a partner's face) are also differentiated from emotional faces. The aim of this event-related potential (ERP) study was to investigate the neural correlates of personally relevant faces (the self-face and a close other's face) as well as emotionally positive (happy) and neutral faces. Participants were tasked with the simple detection of faces. Amplitudes of N170 were more negative in the right than in the left hemisphere and were not modulated by type of face. A similar pattern of N2 and P3 results was observed for the self-face and the close other's face: both were associated with decreased N2 and increased P3 relative to happy and neutral faces. However, the self-face was preferentially processed even when compared to the close other's face, as revealed by lower N2 and higher P3 amplitudes. Nonparametric cluster-based permutation tests showed an analogous pattern of results: significant clusters for the self-face compared with all other faces (close other's, happy, neutral) and for the close other's face compared with happy and neutral faces. In summary, self-face prioritization was observed, as indicated by significant differences between one's own face and all other faces. Crucially, both types of personally relevant faces differed from happy faces. These findings point to the pivotal role of subjective evaluation of saliency.
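The nonparametric cluster-based permutation logic used in this kind of ERP analysis can be sketched as follows. This is a generic sign-flip implementation for paired data over time, not the authors' analysis code; the cluster-forming threshold and permutation count are illustrative assumptions.

```python
import numpy as np

def cluster_permutation_test(cond_a, cond_b, n_perm=500, thresh=2.0, seed=0):
    """Cluster-based permutation test for paired ERP amplitude data.

    cond_a, cond_b: (n_subjects, n_timepoints) arrays.
    Returns the largest observed cluster mass (summed |t| over a contiguous
    suprathreshold run) and its sign-flip permutation p-value.
    """
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                        # paired differences

    def max_cluster_mass(d):
        # Paired t-statistic at every time point.
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        supra = np.abs(t) > thresh                # cluster-forming threshold
        mass, best = 0.0, 0.0
        for above, tv in zip(supra, np.abs(t)):
            mass = mass + tv if above else 0.0    # accumulate contiguous mass
            best = max(best, mass)
        return best

    observed = max_cluster_mass(diff)
    # Null distribution: randomly flip the sign of each subject's difference wave.
    null = []
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        null.append(max_cluster_mass(diff * signs))
    p = (1 + sum(m >= observed for m in null)) / (n_perm + 1)
    return observed, p
```

Calling it on two (subjects x timepoints) amplitude arrays, e.g. self-face vs. happy-face epochs, yields the largest cluster mass and a p-value that is corrected for the multiple comparisons across time points by construction.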
2. Learning modifies attention during bumblebee visual search. Behav Ecol Sociobiol 2024; 78:22. PMID: 38333735; PMCID: PMC10847365; DOI: 10.1007/s00265-024-03432-z.
Abstract
The role of visual search during bee foraging is relatively understudied compared to the choices made by bees. As bees learn about rewards, we predicted that visual search would be modified to prioritise rewarding flowers. To test this, we ran an experiment testing how bee search differs in the initial and later parts of training as bees learn about flowers with either higher- or lower-quality rewards. We then ran an experiment to see how this prior training with reward influences their search on a subsequent task with different flowers. We used the time spent inspecting flowers as a measure of attention and found that learning increased attention to rewards and away from unrewarding flowers. Higher-quality rewards led to decreased attention to non-flower regions, but lower-quality rewards did not. Prior experience of lower rewards also led to more attention to higher rewards compared to unrewarding flowers and non-flower regions. Our results suggest that flowers would elicit differences in bee search behaviour depending on the sugar content of their nectar. They also demonstrate the utility of studying visual search and have important implications for understanding the pollination ecology of flowers with different qualities of reward.
Significance statement: Studies investigating how foraging bees learn about reward typically focus on the choices made by the bees. How bees deploy attention and visual search during foraging is less well studied. We analysed flight videos to characterise visual search as bees learn which flowers are rewarding. We found that learning increases the focus of bees on flower regions. We also found that the quality of the reward a flower offers influences how much bees search in non-flower areas. This means that a flower with lower reward attracts less focussed foraging compared to one with a higher reward. Since flowers do differ in floral reward, this has important implications for how focussed pollinators will be on different flowers. Our approach of looking at search behaviour and attention thus advances our understanding of the cognitive ecology of pollination.
Supplementary information: The online version contains supplementary material available at 10.1007/s00265-024-03432-z.
3. Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage. Psychon Bull Rev 2024. PMID: 38273144; DOI: 10.3758/s13423-024-02454-y.
Abstract
When viewing the actions of others, we not only see patterns of body movements, but we also "see" the intentions and social relations of people. Experienced forensic examiners - Closed Circuit Television (CCTV) operators - have been shown to outperform novices in identifying and predicting hostile intentions from surveillance footage. However, it remains largely unknown what visual content CCTV operators actively attend to, and whether CCTV operators develop different strategies for active information seeking than novices do. Here, we conducted computational analyses of gaze-centered stimuli derived from the eye movements of experienced CCTV operators and novices viewing the same surveillance footage. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the looking behavior of CCTV operators differs from that of novices: operators actively attend to visual content with different patterns of saliency and semantic features. Expertise in selectively utilizing informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
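The general idea of scoring gaze-centered regions with low-level features can be illustrated with a minimal sketch: crop a patch around a fixation and score it with a crude RMS-contrast feature. The actual study used a full saliency model and AlexNet activations; the patch size and the contrast measure below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def gaze_centered_patch(image, gaze_xy, half=16):
    """Crop a square patch of the image centred on the gaze position (x, y)."""
    x, y = gaze_xy
    return image[y - half:y + half, x - half:x + half]

def local_contrast(patch):
    """A crude low-level feature: RMS luminance contrast of the patch."""
    return float(patch.astype(float).std())

# Toy frame: one bright object in an otherwise uniform scene.
frame = np.zeros((240, 320))
frame[100:140, 140:180] = 1.0

# A fixation on the object's edge lands on high-contrast content;
# a fixation on empty background does not.
on_edge = local_contrast(gaze_centered_patch(frame, (140, 100)))
background = local_contrast(gaze_centered_patch(frame, (40, 40)))
```

Replacing `local_contrast` with a saliency model or DCNN feature extractor applied to each patch gives the kind of per-fixation feature vectors that can then be compared between expert and novice groups.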
4. Retinal eccentricity modulates saliency-driven but not relevance-driven visual selection. Atten Percept Psychophys 2024. PMID: 38273181; DOI: 10.3758/s13414-024-02848-z.
Abstract
Where we move our eyes during visual search is controlled by the relative saliency and relevance of stimuli in the visual field. However, the visual field is not homogeneous, as both sensory representations and attention change with eccentricity. Here we present an experiment investigating how eccentricity differences between competing stimuli affect saliency- and relevance-driven selection. Participants made a single eye movement to a predefined orientation singleton target that was simultaneously presented with an orientation singleton distractor in a background of multiple homogeneously oriented other items. The target was either more or less salient than the distractor. Moreover, each of the two singletons could be presented at one of three different retinal eccentricities, such that both were presented at the same eccentricity, one eccentricity value apart, or two eccentricity values apart. The results showed that selection was initially determined by saliency, followed after about 300 ms by relevance. In addition, observers preferred to select the closer over the more distant singleton, and this central selection bias increased with increasing eccentricity difference. Importantly, it largely emerged within the same time window as the saliency effect, thereby resulting in a net reduction of the influence of saliency on the selection outcome. In contrast, the relevance effect remained unaffected by eccentricity. Together, these findings demonstrate that eccentricity is a major determinant of selection behavior, even to the extent that it modifies the relative contribution of saliency in determining where people move their eyes.
5. The influence of stereopsis on visual saliency in a proto-object based model of selective attention. Vision Res 2023; 212:108304. PMID: 37542763; PMCID: PMC10592191; DOI: 10.1016/j.visres.2023.108304.
Abstract
Some animals, including humans, use stereoscopic vision, which reconstructs spatial information about the environment from the disparity between images captured by eyes in two separate adjacent locations. Like other sensory information, such stereoscopic information is expected to influence attentional selection. We develop a biologically plausible model of binocular vision to study its effect on bottom-up visual attention, i.e., visual saliency. In our model, the scene is organized in terms of proto-objects on which attention acts, rather than unbound sets of elementary features. We show that taking the stereoscopic information into account yields a statistically significant improvement in the model's prediction of human eye movements.
6. Eye movement evidence for the V1 Saliency Hypothesis and the Central-peripheral Dichotomy theory in an anomalous visual search task. Vision Res 2023; 212:108308. PMID: 37659334; DOI: 10.1016/j.visres.2023.108308.
Abstract
Typically, searching for a target among uniformly tilted non-targets is easier when this target is perpendicular, rather than parallel, to the non-targets. The V1 Saliency Hypothesis (V1SH) - that V1 creates a saliency map to guide attention exogenously - predicts exactly the opposite in a special case: each target or non-target is a pair of equally-sized disks, a homo-pair of two disks of the same color, black or white, or a hetero-pair of two disks of the opposite color; the inter-disk displacement defines its orientation. This prediction - parallel advantage - was supported by the finding that parallel targets require shorter reaction times (RTs) to report targets' locations. Furthermore, it is stronger for targets further from the center of search images, as predicted by the Central-peripheral Dichotomy (CPD) theory entailing that saliency effects are stronger in peripheral than in central vision. However, the parallel advantage could arise from a shorter time required to recognize - rather than to shift attention to - the parallel target. By gaze tracking, the present study confirms that the parallel advantage is solely due to the RTs for the gaze to reach the target. Furthermore, when the gaze is sufficiently far from the target during search, saccade to a parallel, rather than perpendicular, target is more likely, demonstrating the Central-peripheral Dichotomy more directly. Parallel advantage is stronger among observers encouraged to let their search be guided by spontaneous gaze shifts, which are presumably guided by bottom-up saliency rather than top-down factors.
7. Stakes of neuromorphic foveation: a promising future for embedded event cameras. Biol Cybern 2023; 117:389-406. PMID: 37733033; DOI: 10.1007/s00422-023-00974-9.
Abstract
Foveation can be defined as the organic action of directing the gaze towards a visual region of interest to acquire relevant information selectively. With the recent advent of event cameras, we believe that taking advantage of this visual neuroscience mechanism would greatly improve the efficiency of event data processing. Indeed, applying foveation to event data would make it possible to comprehend the visual scene while significantly reducing the amount of raw data to handle. In this respect, we demonstrate the stakes of neuromorphic foveation theoretically and empirically across several computer vision tasks, namely semantic segmentation and classification. We show that foveated event data offer a significantly better trade-off between the quantity and quality of the information conveyed than high- or low-resolution event data. Furthermore, this compromise extends even over fragmented datasets. Our code is publicly available online at https://github.com/amygruel/FoveationStakes_DVS.
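The core data-reduction idea, keeping only events near a region of interest, can be sketched in a few lines. This is an illustrative toy, not the authors' released implementation (see their GitHub repository); the event layout (x, y, t, polarity) and the circular fovea are assumptions.

```python
import numpy as np

def foveate_events(events, center, radius):
    """Keep only events inside a circular 'fovea' around a region of interest.

    events: (n, 4) array with columns (x, y, timestamp, polarity).
    Returns the subset of events whose (x, y) lies within `radius` of `center`.
    """
    xy = events[:, :2].astype(float)
    d = np.linalg.norm(xy - np.asarray(center, dtype=float), axis=1)
    return events[d <= radius]

# A toy event stream: three events near the fovea centre, two far away.
stream = np.array([
    [10, 10, 1, 1],
    [12,  9, 2, 0],
    [11, 11, 3, 1],
    [90, 90, 4, 1],
    [95,  5, 5, 0],
])
foveated = foveate_events(stream, center=(10, 10), radius=5)
```

In a real pipeline the fovea centre would be supplied by a saliency or attention mechanism and updated over time, so the downstream segmentation or classification network only ever sees the reduced stream.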
8. Testing the saliency-based account of phasic alertness. Psychon Bull Rev 2023; 30:1857-1865. PMID: 37069423; DOI: 10.3758/s13423-023-02292-4.
Abstract
As an essential component of the human attention system, phasic alertness refers to the change in performance brought about by a preceding warning signal. Weinbach and Henik (Cognition, 133(2), 414-419, 2014) argued that phasic alertness is an adaptive mechanism that diverts attention to salient events. This mechanism enhances selective attention when the critical event is more salient than others; when selective attention to less salient details is required, phasic alertness can lead to more interference from task-irrelevant information. The experiment on which this saliency-based account of phasic alertness is based has not been replicated. In two experiments, the present study attempted to replicate the alertness-related findings of Weinbach and Henik. Although we used a similar design, the results did not reveal evidence for an interaction between phasic alertness and response congruency in the global/local processing task. Our results do not support the saliency-based account of phasic alertness, and we argue that this account requires more systematic investigation.
9. Possibility of additive effects by the presentation of visual information related to distractor sounds on the contra-sound effects of the N100m responses. Hear Res 2023; 434:108778. PMID: 37105052; DOI: 10.1016/j.heares.2023.108778.
Abstract
Auditory-evoked responses can be affected by different types of contralateral sounds or by attention modulation. The present study examined the additive effects of presenting visual information about contralateral sounds, serving as distractions during dichotic listening tasks, on the contralateral effects of N100m responses in the auditory cortex of 16 subjects (12 males and 4 females). In magnetoencephalography, a tone-burst of 500 ms duration at a frequency of 1000 Hz was played to the left ear at a level of 70 dB as a stimulus to elicit the N100m response, and a movie clip was used as a distractor stimulus under audio-only, visual-only, and audio-visual conditions. Subjects were instructed to pay attention to the left ear and press the response button each time they heard a tone-burst stimulus in their left ear. The results suggest that the presentation of visual information related to the contralateral sound, which acted as a distractor, significantly suppressed the amplitude of the N100m response compared with the contralateral-sound-only condition. In contrast, the presentation of visual information related to the contralateral sound did not affect the latency of the N100m response. These results suggest that the integration of contralateral sounds and related movies may have produced a more perceptually loaded stimulus and reduced the intensity of attention to tone-bursts. Our findings suggest that selective attention and saliency mechanisms may have cross-modal effects on other modes of perception.
10. Impact of neovascular age-related macular degeneration on eye-movement control during scene viewing: Viewing biases and guidance by visual salience. Vision Res 2022; 201:108105. PMID: 36081228; DOI: 10.1016/j.visres.2022.108105.
Abstract
Human vision requires us to analyze the visual periphery to decide where to fixate next. In the present study, we investigated this process in people with age-related macular degeneration (AMD). In particular, we examined viewing biases and the extent to which visual salience guides fixation selection during free-viewing of naturalistic scenes. We used an approach combining generalized linear mixed modeling (GLMM) with a priori scene parcellation. This method allows one to investigate group differences in terms of scene coverage and observers' well-known tendency to look at the center of scene images. Moreover, it allows for testing whether image salience influences fixation probability above and beyond what can be accounted for by the central bias. Compared with age-matched normally sighted control subjects (and young subjects), AMD patients' viewing behavior was less exploratory, with a stronger central fixation bias. All three subject groups showed a salience effect on fixation selection: higher-salience scene patches were more likely to be fixated. Importantly, the salience effect for the AMD group was of similar size as the salience effect for the control group, suggesting that guidance by visual salience was still intact. The variances for by-subject random effects in the GLMM indicated substantial individual differences. A separate model exclusively considered the AMD data and included fixation stability as a covariate, with the results suggesting that reduced fixation stability was associated with a reduced impact of visual salience on fixation selection.
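The logic of testing whether salience predicts fixation above and beyond the central bias can be illustrated with a simplified fixed-effects logistic model. The study itself used a GLMM with by-subject random effects; the simulated patch data and the plain gradient-descent fit below are illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain logistic regression by gradient ascent on the log-likelihood
    (a stand-in for the GLMM, without random effects)."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)        # average gradient step
    return w

# Simulated scene patches: fixated (1) or not (0), driven by a central bias
# (negative effect of distance-to-centre) plus an independent salience effect.
rng = np.random.default_rng(0)
n = 2000
dist = rng.uniform(0.0, 1.0, n)   # normalised patch distance from image centre
sal = rng.uniform(0.0, 1.0, n)    # normalised patch salience
logit = 0.5 - 2.0 * dist + 1.5 * sal
y = (rng.uniform(0.0, 1.0, n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([dist, sal]), y)
```

A positive fitted salience coefficient alongside a negative distance coefficient is the pattern the GLMM approach looks for: salience matters even after the central bias is accounted for.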
11. Learned low priority of attention after training to suppress color singleton distractor. Atten Percept Psychophys 2022; 85:814-824. PMID: 36175765; DOI: 10.3758/s13414-022-02571-7.
Abstract
Allocating attention to significant events, such as a salient object, is effortless. Our brain is effective at this type of processing because it is generally beneficial for survival. However, a salient object can also be distracting, and ignoring it costs a large amount of cognitive resources. In the present study, we conducted two behavioral experiments to investigate the effect of learned suppression of a salient color. In particular, we were interested in the effect of learning in a new task context in which the previously suppressed color was task-irrelevant. In Experiment 1, we trained the participants for five days with explicit instruction to suppress a color singleton distractor in a visual search task. We measured the effect of training with a dot probe task before and after the training. Colors in the dot probe task served only as the background and were not associated with the position of the target dot. Nevertheless, we found that attention was involuntarily biased away from the previously suppressed color. In Experiment 2, the color singleton could be either the target or the distractor in the visual search task, making suppression of the color singleton inefficient for task performance. The results showed no training effect in the dot probe task after this manipulation. These findings provide direct evidence for a learned low priority of attention after training to suppress a color singleton distractor.
12. Pupillary responses to differences in luminance, color and set size. Exp Brain Res 2022; 240:1873-1885. PMID: 35445861; DOI: 10.1007/s00221-022-06367-x.
Abstract
The pupil responds to a salient stimulus appearing in the environment, in addition to its modulation by global luminance. These pupillary responses can be evoked by visual or auditory stimuli, scale with stimulus salience, and are enhanced by multisensory presentation. In addition, pupil size is modulated by various visual stimulus attributes, such as color, area, and motion. However, research that concurrently examines the influence of different factors on pupillary responses is limited. To explore how the presentation of multiple visual stimuli influences human pupillary responses, we presented arrays of visual stimuli and systematically varied their luminance, color, and set size. Saliency level, computed by a saliency model, systematically changed with set size across all conditions, with higher saliency levels at larger set sizes. Pupillary constriction responses were evoked by the appearance of visual stimuli, with larger responses observed at larger set sizes. These effects remained pronounced even though the global luminance level was held constant by using isoluminant chromatic stimuli. Furthermore, larger pupillary constriction responses were obtained for blue than for the other color conditions. Together, we argue that both cortical and subcortical areas contribute to the observed pupillary constriction modulated by set size and color.
13. The elephant in the room: attention to salient scene features increases with comedic expertise. Cogn Process 2022; 23:203-215. PMID: 35267116; DOI: 10.1007/s10339-022-01079-0.
Abstract
What differentiates the joke writing strategy employed by professional comedians from non-comedians? Previous MRI work found that professional comedians relied to a greater extent on "bottom-up processes," i.e., associations driven by the prompt stimuli themselves, while controls relied more on prefrontal lobe directed, "top-down" processes. In the present work, professional improv comedians and controls generated humorous captions to cartoons while their eye movements were tracked. Participants' visual fixation patterns were compared to predictions of the saliency model (Harel et al. in Adv Neural Inf Process Syst 19:545-552, 2007)-a computer model for identifying the most salient locations in an image based on visual features. Captions generated by the participants were rated for funniness by independent raters. Relative to controls, professional comedians' gaze was driven to a greater extent by the cartoons' salient visual features. For all participants, captions' funniness positively correlated with visual attention to salient cartoon features. Results suggest that comedic expertise is associated with increased reliance on bottom-up, stimulus-driven creativity, and that a bottom-up strategy results, on average, in funnier captions whether employed by comedians or controls. The cognitive processes underlying successful comedic creativity appear to adhere to the old comedians' adage "pay attention to the elephant in the room."
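One standard way to quantify how strongly fixations land on model-predicted salient regions is Normalised Scanpath Saliency (NSS): the mean z-scored salience at fixated locations. The sketch below is a generic illustration of that comparison, not necessarily the exact statistic used in the study; the toy map and fixation lists are assumptions.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalised Scanpath Saliency: mean z-scored salience at fixated pixels.

    fixations: list of (x, y) pixel coordinates.
    Positive values mean fixations fall on salient regions more than chance.
    """
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[y, x] for x, y in fixations]))

# Toy saliency map with one salient region ("the elephant in the room").
sal = np.zeros((10, 10))
sal[4:6, 4:6] = 1.0

salient_gaze = [(4, 4), (5, 5)]   # fixations landing on the salient region
elsewhere = [(0, 0), (9, 9)]      # fixations landing on non-salient background
```

Comparing the mean NSS of comedians' fixations with that of controls on the same cartoons is the kind of group contrast the study's analysis rests on: a higher score indicates gaze driven more strongly by the image's salient features.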
14. Saliency determines the integration of contextual information into stimulus-response episodes. Atten Percept Psychophys 2022; 84:1264-1285. PMID: 35048312; PMCID: PMC9076722; DOI: 10.3758/s13414-021-02428-5.
Abstract
When humans perform a task, it has been shown that elements of this task, like stimulus (e.g., target and distractor) and response, are bound together into a common episodic representation called stimulus–response episode (or event file). Recently, the context, a completely task-irrelevant stimulus, was found to be integrated into an episode as well. However, instead of being bound directly with the response in a binary fashion, the context modulates the binary binding between the distractor and response. This finding raises the questions of whether the context can also enter into a binary binding with the response, and if so, what determines the way of its integration. In order to resolve these questions, saliency of the context was manipulated in three experiments by changing the loudness (Experiment 1) and emotional valence (Experiment 2A and 2B) of the context. All experiments implemented the four-alternative auditory negative priming paradigm introduced by Mayr and Buchner (2006, Journal of Experimental Psychology: Human Perception and Performance, 32[4], 932–943). Results showed that the integration of context changed as a function of its saliency level. Specifically, the context of low saliency was not bound at all, the context of moderate saliency modulated the binary binding between the distractor and response, whereas the context of high saliency entered into a binary binding with the response. The current results extend a previous finding by Hommel (2004, Trends in Cognitive Sciences, 8[11], 494–500) that there is a saliency threshold which determines whether a stimulus is bound or not, by suggesting that a second threshold determines the specific structure (i.e., binary vs. configural) of the resulting binding.
15. Parallel Advantage: Further Evidence for Bottom-up Saliency Computation by Human Primary Visual Cortex. Perception 2022; 51:60-69. PMID: 35025626; PMCID: PMC8938995; DOI: 10.1177/03010066211062583.
Abstract
Finding a target among uniformly oriented non-targets is typically faster when this target is perpendicular, rather than parallel, to the non-targets. The V1 Saliency Hypothesis (V1SH), that neurons in the primary visual cortex (V1) signal saliency for exogenous attentional attraction, predicts exactly the opposite in a special case: each target or non-target comprises two equally sized disks displaced from each other by 1.2 disk diameters center-to-center along a line defining its orientation. A target has two white or two black disks. Each non-target has one white disk and one black disk, and thus, unlike the target, activates V1 neurons less when its orientation is parallel rather than perpendicular to the neurons' preferred orientations. When the target is parallel, rather than perpendicular, to the uniformly oriented non-targets, the target's evoked V1 response escapes V1's iso-orientation surround suppression, making the target more salient. I present behavioral observations confirming this prediction.
16. Five weeks of intermittent transcutaneous vagus nerve stimulation shape neural networks: a machine learning approach. Brain Imaging Behav 2021; 16:1217-1233. PMID: 34966977; PMCID: PMC9107416; DOI: 10.1007/s11682-021-00572-y.
Abstract
Invasive and transcutaneous vagus nerve stimulation [(t)VNS] have been used to treat epilepsy, depression and migraine, and have also shown effects on metabolism and body weight. To what extent this treatment shapes neural networks, and how such network changes might be related to treatment effects, is currently unclear. Using a pre-post mixed study design, we applied either tVNS or sham stimulation (5 h/week) to 34 overweight male participants in the context of a study designed to assess effects of tVNS on body weight and metabolic and cognitive parameters. Resting-state (rs) fMRI was measured about 12 h after the last stimulation period. Support vector machine (SVM) classification was applied to fractional amplitude of low-frequency fluctuations (fALFF) on established rs-networks. All classification results were controlled for random effects and overfitting. Finally, we calculated multiple regressions between the classification results and reported food craving. We found a classification accuracy (CA) of 79% in a subset of four brainstem regions, suggesting that tVNS leads to lasting changes in brain networks. Five of eight salience network regions yielded 76.5% CA. Our study shows post-stimulation effects of tVNS on fALFF in the salience rs-network. More detailed investigations of this effect and its relationship with food intake seem reasonable for future studies.
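Controlling a classification accuracy for chance is commonly done with a label-permutation test. The sketch below substitutes a nearest-class-mean classifier for the study's SVM, and the simulated "fALFF" features are illustrative assumptions rather than the study's data; it shows the control logic, not the original analysis.

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-mean classifier
    (a lightweight stand-in for a cross-validated SVM)."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i           # hold out participant i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = 1 if np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0) else 0
        hits += pred == y[i]
    return hits / len(y)

def permutation_p(X, y, n_perm=200, seed=0):
    """Compare the true accuracy against a label-shuffled null distribution."""
    rng = np.random.default_rng(seed)
    acc = loo_accuracy(X, y)
    null = [loo_accuracy(X, rng.permutation(y)) for _ in range(n_perm)]
    return acc, (1 + sum(a >= acc for a in null)) / (n_perm + 1)

# Toy data: 17 sham vs 17 tVNS participants, 4 regional fALFF features each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (17, 4)), rng.normal(1.5, 1.0, (17, 4))])
y = np.array([0] * 17 + [1] * 17)
acc, p = permutation_p(X, y)
```

If the true accuracy sits in the tail of the shuffled-label distribution, the classification is unlikely to reflect chance or overfitting to arbitrary labels.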
17. Simplified, interpretable graph convolutional neural networks for small molecule activity prediction. J Comput Aided Mol Des 2021; 36:391-404. PMID: 34817762; PMCID: PMC9325818; DOI: 10.1007/s10822-021-00421-6.
Abstract
We here present a streamlined, explainable graph convolutional neural network (gCNN) architecture for small molecule activity prediction. We first conduct a hyperparameter optimization across nearly 800 protein targets that produces a simplified gCNN QSAR architecture, and we observe that such a model can yield performance improvements over both standard gCNN and RF methods on difficult-to-classify test sets. Additionally, we discuss how reductions in convolutional layer dimensions potentially speak to the “anatomical” needs of gCNNs with respect to radial coarse graining of molecular substructure. We augment this simplified architecture with saliency map technology that highlights molecular substructures relevant to activity, and we perform saliency analysis on nearly 100 data-rich protein targets. We show that resultant substructural clusters are useful visualization tools for understanding substructure-activity relationships. We go on to highlight connections between our models’ saliency predictions and observations made in the medicinal chemistry literature, focusing on four case studies of past lead finding and lead optimization campaigns.
|
18
|
ResMem-Net: memory based deep CNN for image memorability estimation. PeerJ Comput Sci 2021; 7:e767. [PMID: 34825056 PMCID: PMC8594589 DOI: 10.7717/peerj-cs.767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Accepted: 10/12/2021] [Indexed: 06/13/2023]
Abstract
Image memorability is a hard problem in image processing due to its subjective nature, but the introduction of deep learning and the wide availability of data and GPUs have enabled great strides in predicting the memorability of an image. In this paper, we propose a novel deep learning architecture called ResMem-Net, a hybrid of LSTM and CNN that uses information from the hidden layers of the CNN to compute the memorability score of an image. The intermediate layers are important for predicting the output because they contain information about the intrinsic properties of the image. The proposed architecture automatically learns visual emotions and saliency, as shown by the heatmaps generated using the GradRAM technique. We have also used the heatmaps and results to analyze and answer one of the most important questions in image memorability: what makes an image memorable? The model is trained and evaluated using the publicly available Large-scale Image Memorability dataset (LaMem) from MIT. The results show that the model achieves a rank correlation of 0.679 and a mean squared error of 0.011, which is better than current state-of-the-art models and close to human consistency (p = 0.68). The proposed architecture also has significantly fewer parameters than state-of-the-art architectures, making it memory-efficient and suitable for production.
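The two evaluation figures quoted above are standard quantities: Spearman rank correlation between predicted and human memorability scores, and mean squared error. A small self-contained sketch of both (tie handling is omitted for brevity; the official LaMem evaluation handles ties properly):

```python
import math

def _ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos  # ties receive arbitrary consecutive ranks here
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(var_x * var_y)

def mse(pred, true):
    """Mean squared error between predicted and ground-truth scores."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)
```

Any monotonically consistent prediction scores a rank correlation of 1.0 regardless of scale, which is why rank correlation rather than raw error is the headline metric for memorability models.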
|
19
|
Towards a unified neural mechanism for reactive adaptive behaviour. Prog Neurobiol 2021; 204:102115. [PMID: 34175406 PMCID: PMC7611662 DOI: 10.1016/j.pneurobio.2021.102115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 06/17/2021] [Accepted: 06/22/2021] [Indexed: 11/27/2022]
Abstract
Surviving in natural environments requires animals to sense sudden events and swiftly adapt behaviour accordingly. The study of such Reactive Adaptive Behaviour (RAB) has been central to a number of research streams, all orbiting around movement science but progressing in parallel, with little cross-field fertilization. We first provide a concise review of these research streams, independently describing four types of RAB: (1) cortico-muscular resonance, (2) stimulus locked response, (3) online motor correction and (4) action stopping. We then highlight remarkable similarities across these four RABs, suggesting that they might be subserved by the same neural mechanism, and propose directions for future research on this topic.
|
20
|
Cortical interaction of bilateral inputs is similar for noxious and innocuous stimuli but leads to different perceptual effects. Exp Brain Res 2021; 239:2803-2819. [PMID: 34279670 DOI: 10.1007/s00221-021-06175-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 07/10/2021] [Indexed: 12/20/2022]
Abstract
The cerebral integration of somatosensory inputs from multiple sources is essential to produce adapted behaviors. Previous studies suggest that bilateral somatosensory inputs interact differently depending on stimulus characteristics, including their noxious nature. The aim of this study was to clarify how bilateral inputs evoked by noxious laser stimuli, noxious shocks, and innocuous shocks interact in terms of perception and brain responses. The experiment comprised two conditions (right-hand stimulation and concurrent stimulation of both hands) in which painful laser stimuli, painful shocks and non-painful shocks were delivered. Perception, somatosensory-evoked potentials (P45, N100, P260), laser-evoked potentials (N1, N2 and P2) and event-related spectral perturbations (delta to gamma oscillation power) were compared between conditions and stimulus modalities. The amplitude of negative vertex potentials (N2 or N100) and the power of delta/theta oscillations were increased in the bilateral compared with unilateral condition, regardless of the stimulus type (P < 0.01). However, gamma oscillation power increased for painful and non-painful shocks (P < 0.01), but not for painful laser stimuli (P = 0.08). Despite the similarities in terms of brain activity, bilateral inputs interacted differently for painful stimuli, for which perception remained unchanged, and non-painful stimuli, for which perception increased. This may reflect a ceiling effect for the attentional capture by noxious stimuli and warrants further investigations to examine the regulation of such interactions by bottom-up and top-down processes.
|
21
|
Computer-aided diagnosis tool for cervical cancer screening with weakly supervised localization and detection of abnormalities using adaptable and explainable classifier. Med Image Anal 2021; 73:102167. [PMID: 34333217 DOI: 10.1016/j.media.2021.102167] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 06/28/2021] [Accepted: 07/07/2021] [Indexed: 01/18/2023]
Abstract
While the Pap test is the most common screening method for cervical cancer, its results depend highly on the ability of cytotechnicians to detect abnormal cells on smears using brightfield microscopy. In this paper, we propose an explainable region classifier for whole slide images that cyto-pathologists could use to handle these very large images (100,000x100,000 pixels) efficiently. We create a dataset that simulates Pap smear regions and use a loss we call classification under regression constraint to train an efficient region classifier (about 66.8% accuracy on severity classification, 95.2% accuracy on normal/abnormal classification, and a KAPPA score of 0.870). We explain how we benefit from this loss to obtain a model focused on sensitivity, and then show that it can be used to perform weakly supervised localization (accuracy of 80.4%) of the cell most responsible for the malignancy of regions of whole slide images. We extend our method to a more general detection of abnormal cells (66.1% accuracy) and ensure that at least one abnormal cell is detected whenever malignancy is present. Finally, we evaluate our solution on a small real clinical slide dataset, highlighting its relevance, adapting it to integrate as easily as possible into a pathology laboratory workflow, and extending it to make slide-level predictions.
|
22
|
Abstract
Saliency and visual attention have been studied in a computational context for decades, mostly in the capacity of predicting spatial topographical saliency maps or simulated heatmaps. Spatial selection by an attentive mechanism is, however, inherently a sequential sampling process in humans. There have been recent efforts in analyzing and modeling scanpaths; however, there is as yet no universal agreement on what metrics should be applied to measure scanpath similarity or the quality of a predicted scanpath from a computational model. Many similarity measures have been suggested in different contexts, and little is known about their behavior or properties. This paper presents in one place a review of these metrics, an axiomatic analysis of gaze metrics for scanpaths, and a careful analysis of the discriminative power of different metrics, in order to provide a roadmap for future analysis. This is accompanied by experimentation based on classic modeling strategies for simulating sequential selection from traditional representations of saliency, and on deep neural networks that produce sequences by construction. The experiments provide strong support for the necessity of sequential analysis of attention and for certain metrics, including a family of metrics introduced in this paper motivated by the notion of scanpath plausibility.
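One widely used family of scanpath metrics, of the kind this review covers, quantizes fixations into grid cells and compares the resulting symbol strings by edit distance (the string-edit/ScanMatch approach). A sketch, with the grid size and the normalization by the longer string length as illustrative choices:

```python
def to_string(scanpath, grid=4, width=1.0, height=1.0):
    """Quantize (x, y) fixations into grid-cell symbols."""
    symbols = []
    for x, y in scanpath:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        symbols.append(row * grid + col)
    return symbols

def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1,   # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

def scanpath_similarity(p, q, grid=4):
    """1 = identical cell sequences, 0 = entirely different."""
    a, b = to_string(p, grid), to_string(q, grid)
    return 1 - edit_distance(a, b) / max(len(a), len(b), 1)
```

Note that this metric discards fixation durations and fine spatial position within a cell, which is exactly the kind of property an axiomatic analysis of scanpath metrics must make explicit.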
|
23
|
Disrupted object-scene semantics boost scene recall but diminish object recall in drawings from memory. Mem Cognit 2021; 49:1568-1582. [PMID: 34031795 DOI: 10.3758/s13421-021-01180-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/05/2021] [Indexed: 11/08/2022]
Abstract
Humans are highly sensitive to the statistical relationships between features and objects within visual scenes. Inconsistent objects within scenes (e.g., a mailbox in a bedroom) instantly jump out to us and are known to catch our attention. However, it is debated whether such semantic inconsistencies result in boosted memory for the scene, impaired memory, or have no influence on memory. Here, we examined the relationship of scene-object consistencies on memory representations measured through drawings made during recall. Participants (N = 30) were eye-tracked while studying 12 real-world scene images with an added object that was either semantically consistent or inconsistent. After a 6-minute distractor task, they drew the scenes from memory while pen movements were tracked electronically. Online scorers (N = 1,725) rated each drawing for diagnosticity, object detail, spatial detail, and memory errors. Inconsistent scenes were recalled more frequently, but contained less object detail. Further, inconsistent objects elicited more errors reflecting looser memory binding (e.g., migration across images). These results point to a dual effect in memory of boosted global (scene) but diminished local (object) information. Finally, we observed that participants fixate longest on inconsistent objects, but these fixations during study were not correlated with recall performance, time, or drawing order. In sum, these results show a nuanced effect of scene inconsistencies on memory detail during recall.
|
24
|
Psychophysical data to study the brain network mechanisms involved in reorienting attention to salient events during goal-directed visual discrimination and search tasks. Data Brief 2021; 36:107020. [PMID: 33948454 PMCID: PMC8080445 DOI: 10.1016/j.dib.2021.107020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 03/25/2021] [Accepted: 03/26/2021] [Indexed: 11/22/2022] Open
Abstract
This article presents behavioral and EEG data collected from 19 healthy human volunteers (10 females), aged 21-29 years (mean = 26.9, SD = 2.15), at the National Brain Research Centre, India, during a psychophysical paradigm customized to characterize brain network interactions during saliency processing. We provide all the raw stimulus files used in developing the experimental paradigm of the linked research article "Organization of directed functional connectivity among nodes of ventral attention network reveals the common network mechanisms underlying saliency processing across distinct spatial and spatio-temporal scales" [1] for replication and use by researchers across various cohorts of the population. We provide pre-processed EEG time series segmented into epochs corresponding to three experimental trial conditions, across two visual attention tasks testing the effect of salient distractors on goal-driven tasks. The dataset also includes reaction times corresponding to individual trials. Additionally, structural MRI files for each individual and 3D EEG sensor locations for all volunteers are provided to assist accurate source localization. Therefore, the presented dataset will not only facilitate conventional time-resolved EEG analyses at the sensor level, such as evoked activity and time-frequency analysis, but will also facilitate source-level analyses such as global coherence or phase-amplitude coupling within selected regions of the brain.
|
25
|
There is no evidence that meaning maps capture semantic information relevant to gaze guidance: Reply to Henderson, Hayes, Peacock, and Rehrig (2021). Cognition 2021; 214:104741. [PMID: 33941376 DOI: 10.1016/j.cognition.2021.104741] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 04/15/2021] [Indexed: 11/17/2022]
Abstract
The concerns raised by Henderson, Hayes, Peacock, and Rehrig (2021) are based on misconceptions of our work. We show that Meaning Maps (MMs) do not predict gaze guidance better than a state-of-the-art saliency model that is based on semantically-neutral, high-level features. We argue that there is therefore no evidence to date that MMs index anything beyond these features. Furthermore, we show that although alterations in meaning cause changes in gaze guidance, MMs fail to capture these alterations. We agree that semantic information is important in the guidance of eye-movements, but the contribution of MMs for understanding its role remains elusive.
|
26
|
The effects of perceptual cues on visual statistical learning: Evidence from children and adults. Mem Cognit 2021; 49:1645-1664. [PMID: 33876401 DOI: 10.3758/s13421-021-01179-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/01/2021] [Indexed: 11/08/2022]
Abstract
In visual statistical learning, one can extract the statistical regularities of target locations in an incidental manner. The current study examined the impact of salient perceptual cues on one type of visual statistical learning: probability cueing effects. In a visual search task, the target appeared more often in one quadrant (i.e., rich) than the other quadrants (i.e., sparse). Then, the screen was rotated by 90° and the targets appeared in the four quadrants with equal probabilities. In Experiment 1 without the addition of salient perceptual cues, adults showed significant probability cueing effects, but did not show a persistent attentional bias in the testing phase. In Experiments 2, 3, and 4, salient perceptual cues were added to the rich or the sparse quadrants. Adults showed significant probability cueing effects but no persistent attentional bias. In Experiment 5, younger children, older children, and adults showed significant probability cueing effects. All three groups also showed an attentional gradient phenomenon: reaction times were slower when the targets were in the sparse quadrant diagonal to, rather than adjacent to, the rich quadrant. Furthermore, both children groups showed a persistent egocentric attentional bias in the testing phase. These findings indicated that salient perceptual cues enhanced but did not reduce probability cueing effects, children and adults shared similar basic attentional mechanisms in probability cueing effects, and children and adults showed differences in the persistence of attentional bias.
|
27
|
Organization of directed functional connectivity among nodes of ventral attention network reveals the common network mechanisms underlying saliency processing across distinct spatial and spatio-temporal scales. Neuroimage 2021; 231:117869. [PMID: 33607279 DOI: 10.1016/j.neuroimage.2021.117869] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 02/06/2021] [Accepted: 02/11/2021] [Indexed: 12/20/2022] Open
Abstract
Previous neuroimaging studies have extensively evaluated the structural and functional connectivity of the Ventral Attention Network (VAN) and its role in reorienting attention in the presence of a salient (pop-out) stimulus. However, a detailed understanding of the "directed" functional connectivity within the VAN during the process of reorientation remains elusive. Functional magnetic resonance imaging (fMRI) studies have not adequately addressed this issue due to a lack of appropriate temporal resolution required to capture this dynamic process. The present study investigates the neural changes associated with processing salient distractors operating at a slow and a fast time scale using custom-designed experiment involving visual search on static images and dynamic motion tracking, respectively. We recorded high-density scalp electroencephalography (EEG) from healthy human volunteers, obtained saliency-specific behavioral and spectral changes during the tasks, localized the sources underlying the spectral power modulations with individual-specific structural MRI scans, reconstructed the waveforms of the sources and finally, investigated the causal relationships between the sources using spectral Granger-Geweke Causality (GGC). We found that salient stimuli processing, across tasks with varying spatio-temporal complexities, involves a characteristic modulation in the alpha frequency band which is executed primarily by the nodes of the VAN constituting the temporo-parietal junction (TPJ), the insula and the lateral prefrontal cortex (lPFC). The directed functional connectivity results further revealed the presence of bidirectional interactions among prominent nodes of right-lateralized VAN, corresponding only to the trials with saliency. Thus, our study elucidates the invariant network mechanisms for processing saliency in visual attention tasks across diverse time-scales.
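The directed-connectivity analysis above rests on Granger causality: a source x "Granger-causes" y if x's past improves prediction of y beyond y's own past. The study uses spectral Granger-Geweke causality on reconstructed source waveforms; a minimal time-domain analogue with a single lag and no intercept (illustrative only, not the paper's estimator):

```python
import math

def granger_causality(x, y, lag=1):
    """Time-domain Granger causality from x to y with one lag:
    log ratio of residual variances of the restricted model
    (y's own past only) vs. the full model (y's and x's past)."""
    Y, y1, x1 = y[lag:], y[:-lag], x[:-lag]
    n = len(Y)
    # restricted: Y = a * y1 (closed-form OLS)
    a = sum(p * q for p, q in zip(y1, Y)) / sum(p * p for p in y1)
    var_r = sum((q - a * p) ** 2 for p, q in zip(y1, Y)) / n
    # full: Y = a * y1 + b * x1, via 2x2 normal equations
    s11 = sum(p * p for p in y1)
    s12 = sum(p * q for p, q in zip(y1, x1))
    s22 = sum(q * q for q in x1)
    t1 = sum(p * q for p, q in zip(y1, Y))
    t2 = sum(p * q for p, q in zip(x1, Y))
    det = s11 * s22 - s12 * s12
    af = (t1 * s22 - t2 * s12) / det
    bf = (t2 * s11 - t1 * s12) / det
    var_f = sum((q - af * p - bf * r) ** 2
                for p, r, q in zip(y1, x1, Y)) / n
    return math.log(var_r / var_f)
```

Computing this quantity in both directions for a pair of sources is what distinguishes the bidirectional interactions reported for the saliency trials from mere undirected correlation.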
|
28
|
Can expectation suppression be explained by reduced attention to predictable stimuli? Neuroimage 2021; 231:117824. [PMID: 33549756 DOI: 10.1016/j.neuroimage.2021.117824] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 01/27/2021] [Accepted: 01/31/2021] [Indexed: 11/23/2022] Open
Abstract
The expectation-suppression effect - reduced stimulus-evoked responses to expected stimuli - is widely considered to be an empirical hallmark of reduced prediction errors in the framework of predictive coding. Here we challenge this notion by proposing that expectation suppression could be explained by a reduced attention effect. Specifically, we argue that reduced responses to predictable stimuli can also be explained by a reduced saliency-driven allocation of attention. We base our discussion mainly on findings in the visual cortex and propose that resolving this controversy requires the assessment of qualitative differences between the ways in which attention and surprise enhance brain responses.
|
29
|
Meaning maps and saliency models based on deep convolutional neural networks are insensitive to image meaning when predicting human fixations. Cognition 2020; 206:104465. [PMID: 33096374 DOI: 10.1016/j.cognition.2020.104465] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 09/04/2020] [Accepted: 09/08/2020] [Indexed: 11/24/2022]
Abstract
Eye movements are vital for human vision, and it is therefore important to understand how observers decide where to look. Meaning maps (MMs), a technique to capture the distribution of semantic information across an image, have recently been proposed to support the hypothesis that meaning rather than image features guides human gaze. MMs have the potential to be an important tool far beyond eye-movements research. Here, we examine central assumptions underlying MMs. First, we compared the performance of MMs in predicting fixations to saliency models, showing that DeepGaze II - a deep neural network trained to predict fixations based on high-level features rather than meaning - outperforms MMs. Second, we show that whereas human observers respond to changes in meaning induced by manipulating object-context relationships, MMs and DeepGaze II do not. Together, these findings challenge central assumptions underlying the use of MMs to measure the distribution of meaning in images.
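Comparisons of fixation-prediction performance such as the one above typically score each map by how well its values at fixated locations rank above its values elsewhere (an AUC-style metric). A sketch over a dict-of-pixels representation, which is an illustrative simplification of the image-array metrics actually used:

```python
def saliency_auc(sal_map, fixations):
    """AUC: probability that a randomly chosen fixated pixel has a
    higher saliency value than a randomly chosen non-fixated pixel.
    sal_map maps (row, col) -> saliency value."""
    fixated = set(fixations)
    pos = [sal_map[p] for p in fixated]
    neg = [v for p, v in sal_map.items() if p not in fixated]
    wins = ties = 0
    for a in pos:
        for b in neg:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    total = len(pos) * len(neg)
    return (wins + 0.5 * ties) / total if total else 0.5
```

A map that always puts its highest values on fixated pixels scores 1.0, chance is 0.5; comparing such scores for meaning maps versus DeepGaze II on the same fixation data is the form the first analysis takes.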
|
30
|
A saliency-specific and dimension-independent mechanism of distractor suppression. Atten Percept Psychophys 2020; 83:292-307. [PMID: 33025466 PMCID: PMC7538281 DOI: 10.3758/s13414-020-02142-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/07/2020] [Indexed: 11/16/2022]
Abstract
During everyday tasks, salient distractors may capture our attention. Recently, it was shown that through implicit learning, capture by a salient distractor is reduced by suppressing the location where a distractor is likely to appear. In the current study, we presented distractors of different saliency levels at the same specific location, asking the question whether there is always one suppression level for a particular location or whether, for one location, suppression depends on the actual saliency of the distractor appearing at that location. In three experiments, we demonstrate a saliency-specific mechanism of distractor suppression, which can be flexibly modulated by the overall probability of encountering distractors of different saliency levels to optimize behavior in a specific environment. The results also suggest that this mechanism has dimension-independent aspects, given that the saliency-specific suppression pattern is unaffected when saliency signals of distractors are generated by different dimensions. It is argued that suppression is saliency-dependent, implying that suppression is modulated on a trial-by-trial basis contingent on the saliency of the actual distractor presented.
|
31
|
The neural basis of feedback-guided behavioral adjustment. Neurosci Lett 2020; 736:135243. [PMID: 32726592 DOI: 10.1016/j.neulet.2020.135243] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Revised: 06/25/2020] [Accepted: 07/07/2020] [Indexed: 11/20/2022]
Abstract
Given feedback on the outcomes of our choices, humans can make adjustments to future decisions. This is how we learn. However, how knowing the outcome of one's decisions influences behavioral changes, and especially the neural basis of those behavioral changes, remains unclear. To investigate these questions, we employed a simple gambling task, in which participants chose between two alternative cards and received trial-by-trial feedback on their choices. In different sessions, we emphasized either utility (win or loss) or performance (whether the choice was correct [better than the alternative] or incorrect), making one of the two aspects more salient to participants. We found that trial-by-trial feedback and the saliency of the feedback modulated behavioral adjustments and subjective evaluations of the outcomes. With simultaneous electroencephalogram (EEG) recording, we found that the feedback-related negativity (FRN), P300, and late positive potential (LPP) served as the neural substrates for behavioral decision switching. Together, our findings reveal the neural basis of behavioral adjustment based on outcome evaluation and highlight the key role of feedback evaluation in future action selection and flexible adaptation.
|
32
|
Watchers do not follow the eye movements of Walkers. Vision Res 2020; 176:130-140. [PMID: 32882595 DOI: 10.1016/j.visres.2020.08.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 08/03/2020] [Accepted: 08/05/2020] [Indexed: 11/27/2022]
Abstract
Eye movements are a functional signature of how the visual system effectively decodes and adapts to the environment. However, scientific knowledge about eye movements mostly arises from studies conducted in laboratories, with well-controlled stimuli presented in constrained, unnatural settings. Only a few studies have attempted to directly assess whether eye movement data acquired in the real world generalize to those acquired in laboratory settings with the same visual inputs. However, none of these studies controlled for both the auditory signals typical of real-world settings and the top-down task effects across conditions, leaving this question unresolved. To minimize this inherent gap across conditions, we compared the eye movements recorded from observers during ecological spatial navigation in the wild (the Walkers) with those recorded in the laboratory (the Watchers) on the same visual and auditory inputs, with both groups performing the very same active cognitive task. We derived robust data-driven statistical saliency and motion maps. The Walkers and Watchers differed in terms of eye movement characteristics: fixation number and duration, and saccade amplitude. The Watchers relied significantly more on saliency and motion than the Walkers. Interestingly, both groups exhibited similar fixation patterns towards social agents and objects. Altogether, our data show that eye movement patterns obtained in the laboratory do not fully generalize to the real world, even when task and auditory information are controlled. These observations invite caution when generalizing eye movements obtained in the laboratory to those of ecological spatial navigation.
|
33
|
Abstract
Ensemble statistics are often thought of as a reliable impression of numerous items despite limited capacities to consciously represent each individual. However, whether all items equally contribute to ensemble summaries (e.g., mean) and whether they might be affected by known limited-capacity processes, such as focused attention, is still debated. We addressed these questions via a recently described "amplification effect," a systematic bias of perceived mean (e.g., average size) towards the more salient "tail" of a feature distribution (e.g., larger items). In our experiments, observers adjusted the mean orientation of sets of items varying in set size. We made some of the items more salient or less salient by changing their size. While the whole orientation distribution was fixed, the more salient subset could be shifted relative to the set mean or differ in range. We measured the bias away from the set mean and the standard deviation (SD) of errors, as it is known to reflect the physical range from which ensemble information is sampled. We found that bias and SD changes followed the shifts and range changes in salient subsets, providing evidence for amplification. However, these changes were weaker than those expected from sampling only salient items, suggesting that less salient items were also sampled. Importantly, the SD decreased as a function of set size, which is only possible if the number of sampled elements increased with set size. Overall, we conclude that orientation summary statistics are sampled from an entire ensemble and modulated by the amplification effect of attention.
|
34
|
Neural mechanisms underlying concurrent listening of simultaneous speech. Brain Res 2020; 1738:146821. [PMID: 32259518 DOI: 10.1016/j.brainres.2020.146821] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Revised: 03/31/2020] [Accepted: 04/03/2020] [Indexed: 10/24/2022]
Abstract
Can we identify what two people are saying at the same time? Although it is difficult to perfectly repeat two or more simultaneous messages, listeners can report information from both speakers. In a concurrent/divided listening task, enhanced attention and segregation of speech can be required, rather than selection and suppression. However, the neural mechanisms of concurrent listening to multi-speaker speech have yet to be clarified. The present study utilized functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers, in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials testing concurrent listening, 31 participants achieved performance comparable with that of selective listening. Furthermore, compared to selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray region. In addition to the salience network for multi-speaker listening, attentional modulation and enhanced segregation of these signals could be used to achieve successful concurrent listening. These results indicate a potential mechanism by which one can listen to two voices with enhanced attention to saliency signals.
|
35
|
Center bias outperforms image salience but not semantics in accounting for attention during scene viewing. Atten Percept Psychophys 2020; 82:985-994. [PMID: 31456175 DOI: 10.3758/s13414-019-01849-7] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is 'pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743-747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R2) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
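The comparison described above can be reproduced in miniature: build a scene-independent center-bias map (here an anisotropic Gaussian, an illustrative stand-in for a saliency model's built-in spatial bias) and compute the squared Pearson correlation (R2) against a fixation-density map:

```python
import math

def center_bias_map(w, h, sigma=0.25):
    """Anisotropic Gaussian centered on the image, with sigma as a
    fraction of each dimension (a stand-in for a model's spatial bias)."""
    cx, cy = (w - 1) / 2, (h - 1) / 2
    sx, sy = sigma * w, sigma * h
    return [[math.exp(-((x - cx) ** 2 / (2 * sx ** 2) +
                        (y - cy) ** 2 / (2 * sy ** 2)))
             for x in range(w)] for y in range(h)]

def r_squared(map_a, map_b):
    """Squared Pearson correlation between two maps, pixelwise."""
    a = [v for row in map_a for v in row]
    b = [v for row in map_b for v in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov * cov / (var_a * var_b)
```

The study's logic is then a subtraction: if r_squared(fixation_density, full_model) barely exceeds r_squared(fixation_density, center_bias_alone), the model's apparent success comes from its scene-independent bias rather than from image features.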
|
36
|
A 100,000-to-1 high dynamic range (HDR) luminance display for investigating visual perception under real-world luminance dynamics. J Neurosci Methods 2020; 338:108684. [PMID: 32169585 DOI: 10.1016/j.jneumeth.2020.108684] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 03/09/2020] [Accepted: 03/09/2020] [Indexed: 11/21/2022]
Abstract
BACKGROUND Real-world illumination challenges both autonomous sensing and displays, because scene luminance can vary by up to 10⁹-to-1, whereas vision models have limited ability to generalize beyond 100-to-1 luminance contrast. Brain mechanisms automatically normalize the visual input based on feature context, but they remain poorly understood because of the limitations of commercially available displays. NEW METHOD Here, we describe procedures for setup, calibration, and precision check of an HDR display system, based on a JVC DLA-RS600U reference projector, with over 100,000-to-1 luminance dynamic range (636-0.006055 cd/m²), pseudo-11-bit grayscale precision, and 3 ms temporal precision in the MATLAB/Psychtoolbox software environment. The setup is synchronized with electroencephalography (EEG) and infrared eye-tracking measurements. RESULTS We show display metrics including light scatter versus average display luminance (ADL), spatial uniformity, and spatial uniformity at high spatial frequency. We also show a luminance normalization phenomenon, contextual facilitation of a high contrast target, whose discovery required an HDR display. COMPARISON WITH EXISTING METHODS This system provides 100-fold greater dynamic range than standard 1000-to-1 contrast displays and increases the number of gray levels from 256 or 1024 (8 or 10 bits) to 2048 (pseudo-11 bits), enabling the study of mesopic-to-photopic vision, at the expense of spatial non-uniformities. CONCLUSIONS This HDR research capability opens new questions of how visual perception is resilient to real-world luminance dynamics and will lead to improved visual modeling of dense urban and forest environments and of mixed indoor-outdoor environments such as cockpits and augmented reality. Our display metrics code can be found at https://github.com/USArmyResearchLab/ARL-Display-Metrics-and-Average-Display-Luminance.
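The headline numbers in this abstract follow directly from the quoted peak and black-level luminances and the gray-level count; a quick sanity check (plain arithmetic, no assumptions beyond the figures quoted above):

```python
import math

# Dynamic range implied by the quoted peak and black-level luminances (cd/m^2).
peak_cd_m2 = 636.0
black_cd_m2 = 0.006055
dynamic_range = peak_cd_m2 / black_cd_m2
print(round(dynamic_range))  # → 105037, i.e. over 100,000-to-1

# 2048 distinct gray levels correspond to the quoted pseudo-11-bit precision.
print(math.log2(2048))  # → 11.0
```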
Collapse
|
37
|
Novelty competes with saliency for attention. Vision Res 2020; 168:42-52. [PMID: 32088400 DOI: 10.1016/j.visres.2020.01.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2018] [Revised: 11/28/2019] [Accepted: 01/07/2020] [Indexed: 11/29/2022]
Abstract
A highly debated question in attention research is to what extent attention is biased by bottom-up factors such as saliency versus top-down factors as governed by the task. Visual search experiments in which participants are briefly familiarized with the task and then see a novel stimulus unannounced and for the first time support yet another factor, showing that novel and surprising features attract attention. In the present study, we tested whether gaze behavior as an indicator for attentional prioritization can be predicted accurately within displays containing both salient and novel stimuli by means of a priority map that assumes novelty as an additional source of activation. To that aim, we conducted a visual search experiment where a color singleton was presented for the first time in the surprise trial and manipulated the color-novelty of the remaining non-singletons between participants. In one group, the singleton was the only novel stimulus ("one-new"), whereas in another group, the non-singleton stimuli were likewise novel ("all-new"). The surprise trial was always target absent and designed such that top-down prioritization of any color was unlikely. The results show that the singleton in the all-new group captured the gaze less strongly, with more early fixations being directed to the novel non-singletons. Overall, the fixation pattern can accurately be explained by noisy priority maps where saliency and novelty compete for gaze control.
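The noisy priority map this abstract invokes can be sketched as a weighted sum of per-item saliency and novelty activations plus Gaussian noise, with the first fixation going to the peak. The weights, noise level, and toy display below are illustrative assumptions, not the fitted model.

```python
import random

def priority_map(saliency, novelty, w_s=1.0, w_n=1.0, noise_sd=0.1, rng=None):
    """Per-item priority: weighted saliency plus novelty activation plus
    Gaussian noise (weights and noise level are illustrative)."""
    if rng is None:
        rng = random.Random(0)
    return [w_s * s + w_n * n + rng.gauss(0, noise_sd)
            for s, n in zip(saliency, novelty)]

# "All-new" surprise display: item 0 is the colour singleton (high saliency),
# but every item, singleton and non-singletons alike, is colour-novel.
saliency = [1.0, 0.2, 0.2, 0.2]
novelty = [1.0, 1.0, 1.0, 1.0]

pm = priority_map(saliency, novelty)
first_fixation = max(range(len(pm)), key=pm.__getitem__)
```

For a "one-new" display one would instead set `novelty = [1.0, 0.0, 0.0, 0.0]`, giving the singleton a larger priority advantage over the non-singletons, in line with the stronger gaze capture reported for that group.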
Collapse
|
38
|
Enumeration strategy differences revealed by saccade-terminated eye tracking. Cognition 2020; 198:104204. [PMID: 32014714 DOI: 10.1016/j.cognition.2020.104204] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Revised: 01/20/2020] [Accepted: 01/24/2020] [Indexed: 12/25/2022]
Abstract
Brain regions involved in saccadic eye movements partially overlap with a frontoparietal network implicated in encoding numerosities. Eye movement patterns may plausibly reflect strategic scanning behaviours to resolve the open-ended task of efficiently enumerating visual arrays. If so, these patterns may help explain individual differences in enumeration acuity in terms of well-understood visual attention mechanisms. Most enumeration eye-tracking paradigms, however, do not allow for direct manipulation of eye movement behaviours to test these claims. In the current study we terminated trials after a specified number of saccades to systematically probe the time course of enumeration strategies. Fifteen adults (11 naïve, 4 informed) enumerated random dot arrays under three conditions: (1) a novel saccade-terminated design where arrays were visible until one, two or four saccades had occurred; (2) a duration-terminated design where arrays were shown for 250, 500 or 1000 ms; and (3) a response-terminated design where arrays were visible until a response. Participants gave more accurate responses when enumerating saccade-terminated trials despite taking a similar time as in the duration-terminated trials. When participants were informed how trials would terminate, their saccade onset latencies shifted to match task demands. Rotating saccade vectors to align with salient image locations accounted for variability in the orientation of saccade trajectories. These findings (1) show that a combination of stimulus-derived visual processing and task-based strategic demands accounts for enumeration eye movement patterns, (2) validate a novel saccade-contingent trial termination procedure for studying sequences of enumeration eye movements, and (3) highlight the need to incorporate analyses of spatial and temporal eye movement patterns into models of visual enumeration strategies.
Collapse
|
39
|
Abstract
OBJECTIVES The authors investigated the topography of cholinergic vulnerability in patients with dementia with Lewy bodies (DLB) using positron emission tomography (PET) imaging with the vesicular acetylcholine transporter (VAChT) [18F]-fluoroethoxybenzovesamicol ([18F]-FEOBV) radioligand. METHODS Five elderly participants with DLB (mean age, 77.8 years [SD=4.2]) and 21 elderly healthy control subjects (mean age, 73.62 years [SD=8.37]) underwent clinical assessment and [18F]-FEOBV PET. RESULTS Compared with the healthy control group, reduced VAChT binding in patients with DLB demonstrated nondiffuse regionally distinct and prominent reductions in bilateral opercula and anterior cingulate to mid-cingulate cortices, bilateral insula, right (more than left) lateral geniculate nuclei, pulvinar, right proximal optic radiation, bilateral anterior and superior thalami, and posterior hippocampal fimbria and fornices. CONCLUSIONS The topography of cholinergic vulnerability in DLB comprises key neural hubs involved in tonic alertness (cingulo-opercular), saliency (insula), visual attention (visual thalamus), and spatial navigation (fimbria/fornix) networks. The distinct denervation pattern suggests an important cholinergic role in specific clinical disease-defining features, such as cognitive fluctuations, visuoperceptual abnormalities causing visual hallucinations, visuospatial changes, and loss of balance caused by DLB.
Collapse
|
40
|
Perceptual load modulates contour integration in conscious and unconscious states. PeerJ 2019; 7:e7550. [PMID: 31497404 PMCID: PMC6708573 DOI: 10.7717/peerj.7550] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Accepted: 07/25/2019] [Indexed: 12/19/2022] Open
Abstract
Previous research has documented that contour detection and integration may be affected either by local features such as the distances between elements or by high-level cognitive factors such as attention in our visual system. Less is known about how low- and high-level factors interact to influence contour integration. In this paper, we investigated how attention modulates contour integration through saliency (different element spacing) and topological properties (circle or S-shaped) when the state of conscious awareness is manipulated. A modified inattentional blindness (IB) paradigm combined with the Posner cuing paradigm was adopted in our three-phased experiment (unconscious-training-conscious). Attention was manipulated with high or low perceptual load for a foveal go/no-go task. Cuing effects were utilized to assess the covert processing of contours prior to a peripheral orientation discrimination task. We found that (1) salient circles and S-contours induced different cuing effects under low perceptual load but not with high load; (2) no consistent pattern of cuing effects was found for non-salient contours in any condition; (3) a positive cuing effect was observed for salient circles both consciously and unconsciously, while a negative cuing effect occurred for salient S-contours only consciously. These results suggest that conscious awareness plays a pivotal role in coordinating a closure effect with the level of perceptual load. Only salient circles can be successfully integrated in an unconscious state under low perceptual load, although both salient circles and S-contours can be integrated consciously. Our findings support a bi-directional mechanism in which low-level sensory features interact with high-level cognitive factors in contour integration.
Collapse
|
41
|
A unified computational framework for visual attention dynamics. PROGRESS IN BRAIN RESEARCH 2019; 249:183-188. [PMID: 31325977 DOI: 10.1016/bs.pbr.2019.01.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
Eye movements are an essential part of human vision as they drive the fovea and, consequently, selective visual attention toward a region of interest in space. Free visual exploration is an inherently stochastic process depending on image statistics but also on individual variability in cognitive and attentive state. We propose a theory of free visual exploration entirely formulated within the framework of physics and based on the general Principle of Least Action. Within this framework, differential laws describing eye movements emerge in accordance with bottom-up functional principles. In addition, we integrate top-down semantic information captured by deep convolutional neural networks pre-trained for the classification of common objects. To stress the model, we used a wide collection of images including basic features as well as high-level semantic content. Results on a saliency prediction task validate the theory.
Collapse
|
42
|
The role of meaning in attentional guidance during free viewing of real-world scenes. Acta Psychol (Amst) 2019; 198:102889. [PMID: 31302302 DOI: 10.1016/j.actpsy.2019.102889] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 06/27/2019] [Accepted: 07/05/2019] [Indexed: 10/26/2022] Open
Abstract
In real-world vision, humans prioritize the most relevant visual information at the expense of other information via attentional selection. The current study sought to understand the role of semantic features and image features on attentional selection during free viewing of real-world scenes. We compared the ability of meaning maps generated from ratings of isolated, context-free image patches and saliency maps generated from the Graph-Based Visual Saliency model to predict the spatial distribution of attention in scenes as measured by eye movements. Additionally, we introduce new contextualized meaning maps in which scene patches were rated based upon how informative or recognizable they were in the context of the scene from which they derived. We found that both context-free and contextualized meaning explained significantly more of the overall variance in the spatial distribution of attention than image salience. Furthermore, meaning explained early attention to a significantly greater extent than image salience, contrary to predictions of the 'saliency first' hypothesis. Finally, both context-free and contextualized meaning predicted attention equivalently. These results support theories in which meaning plays a dominant role in attentional guidance during free viewing of real-world scenes.
Collapse
|
43
|
Free visual exploration of natural movies in schizophrenia. Eur Arch Psychiatry Clin Neurosci 2019; 269:407-418. [PMID: 29305645 DOI: 10.1007/s00406-017-0863-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2017] [Accepted: 12/28/2017] [Indexed: 12/22/2022]
Abstract
BACKGROUND Eye tracking dysfunction (ETD) observed with standard pursuit stimuli represents a well-established biomarker for schizophrenia. How ETD may manifest during free visual exploration of real-life movies is unclear. METHODS Eye movements were recorded (EyeLink®1000) while 26 schizophrenia patients and 25 healthy age-matched controls freely explored nine uncut movies and nine pictures of real-life situations for 20 s each. Subsequently, participants were shown still shots of these scenes to decide whether they had explored them as movies or pictures. Participants were additionally assessed on standard eye-tracking tasks. RESULTS Patients made smaller saccades (movies (p = 0.003), pictures (p = 0.002)) and had a stronger central bias (movies and pictures (p < 0.001)) than controls. In movies, patients' exploration behavior was less driven by image-defined, bottom-up stimulus saliency than controls (p < 0.05). Proportions of pursuit tracking on movies differed between groups depending on the individual movie (group*movie p = 0.011, movie p < 0.001). Eye velocity on standard pursuit stimuli was reduced in patients (p = 0.029) but did not correlate with pursuit behavior on movies. Additionally, patients obtained lower rates of correctly identified still shots as movies or pictures (p = 0.046). CONCLUSION Our results suggest a restricted centrally focused visual exploration behavior in patients not only on pictures, but also on movies of real-life scenes. While ETD observed in the laboratory cannot be directly transferred to natural viewing conditions, these alterations support a model of impairments in motion information processing in patients resulting in a reduced ability to perceive moving objects and less saliency driven exploration behavior presumably contributing to alterations in the perception of the natural environment.
Collapse
|
44
|
Unique objects attract attention even when faint. Vision Res 2019; 160:60-71. [PMID: 31047908 DOI: 10.1016/j.visres.2019.04.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2018] [Revised: 04/11/2019] [Accepted: 04/14/2019] [Indexed: 11/20/2022]
Abstract
Locally contrasting objects, e.g. a red apple surrounded by green apples, attract attention. Does this generalize to differences in feature space? That is, do unique objects-regardless of their location-stand out from a collection of objects that are similar to one another, even when the unique object has lower local contrast with the background than the other objects? Behavioral data show indeed a preference for unique items but previous experiments enabled viewers to anticipate what response they were "supposed" to give. We developed a new experimental paradigm that minimizes such top-down effects. Pitting local contrast against global uniqueness, we show that unique stimuli attract attention even in not-anticipated, never-seen images, and even when the unique stimuli are faint (low contrast). A computational model explains how competition between objects in feature space favors dissimilar objects over those with similar features. The model explains how humans select unique objects, without a loss of performance on natural scenes.
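One simple way to illustrate "competition in feature space favouring dissimilar objects" is to score each item by its mean feature distance to all other items: a globally unique item then wins even when its own feature value (and hence its contrast against the background) is small. This one-dimensional sketch is a deliberate simplification and not the authors' computational model.

```python
def distinctiveness(features):
    """Mean absolute feature distance from each item to all other items;
    a simplified stand-in for feature-space competition."""
    n = len(features)
    return [sum(abs(f - g) for g in features) / (n - 1) for f in features]

# Item 0 is faint (low feature value) but globally unique; items 1-3 are
# stronger yet mutually similar, so they suppress one another.
feats = [0.1, 0.8, 0.8, 0.8]
scores = distinctiveness(feats)
winner = max(range(len(scores)), key=scores.__getitem__)
print(winner, [round(s, 3) for s in scores])  # → 0 [0.7, 0.233, 0.233, 0.233]
```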
Collapse
|
45
|
Abstract
In this study, we introduced familiarity-related inducer items (expressions referring to the participant’s self-related, familiar details: “mine,” “familiar”; and expressions referring to other, unfamiliar details, e.g., “other,” “irrelevant”) to the Complex Trial Protocol version of the P300-based Concealed Information Test (CIT), at the same time using different item categories with various levels of personal importance to the participants (forenames, birthdays, favorite animals). The inclusion of inducers did not significantly improve the overall efficiency of the method as we would have expected considering that these inducers should increase awareness of the denial of the recognition of the probes (the true details of the participants), and hence the subjective saliency of the items (Lukács in J Appl Res Mem Cognit, 6:283–284, 2017a). This may be explained by the visual similarity of inducers to the probe and irrelevant items and the consequent distracting influence of inducers on probe-task performance. On the other hand, the CIT effect (probe-irrelevant P300 differences) was always lower for less personally important (low-salient) and higher for more personally important (high-salient) items.
Collapse
|
46
|
Abstract
It has long been known from animal literature that the locus coeruleus (LC), the source region of noradrenergic neurons in the brain, is sensitive to unexpected, novel, and other salient events. In humans, however, direct assessment of LC activity has proven to be challenging due to its small size and difficult localization, which is why noradrenergic activity has often been assessed using more indirect measures such as electroencephalography (EEG) and pupil recordings. Here, we combined high-resolution functional magnetic resonance imaging (fMRI) with a special anatomical sequence to assess neural activity in the LC in response to different types of salient stimuli in an oddball paradigm (novel neutral oddballs, novel emotional oddballs, and familiar target oddballs). We found a significant linear increase of LC activity from standard trials, over familiar target oddballs, to novel neutral and novel emotional oddballs. Importantly, when breaking down this linear trend, only novel oddball stimuli led to robust activity increases as compared to standard trials, with no statistical difference between neutral and emotional ones. This pattern suggests that activity modulations in the LC in the present study were mainly driven by stimulus novelty, rather than by emotional saliency, task relevance, or contextual novelty alone. Moreover, the absence of significant activity modulations in response to target oddballs (which were reported in a recent study) suggests that the LC represents relative rather than absolute saliency of a stimulus in its respective context.
Collapse
|
47
|
Psychophysical evaluation of individual low-level feature influences on visual attention. Vision Res 2018; 154:60-79. [PMID: 30408434 DOI: 10.1016/j.visres.2018.10.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2018] [Revised: 10/23/2018] [Accepted: 10/26/2018] [Indexed: 11/16/2022]
Abstract
In this study we provide an analysis of eye movement behavior elicited by low-level feature distinctiveness with a dataset of synthetically-generated image patterns. The design of the visual stimuli was inspired by those used in previous psychophysical experiments, namely in free-viewing and visual search tasks, yielding a total of 15 types of stimuli divided according to the task and feature to be analyzed. Our interest is to analyze the influence of low-level feature contrast between a salient region and the remaining distractors, providing fixation localization characteristics and the reaction time of landing inside the salient region. Eye-tracking data was collected from 34 participants during the viewing of a 230-image dataset. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experiment proposes a new psychophysical basis for saliency model evaluation using synthetic images.
Collapse
|
48
|
Saliency modulates behavioral strategies in response to social comparison. Acta Psychol (Amst) 2018; 190:239-247. [PMID: 30149238 DOI: 10.1016/j.actpsy.2018.08.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2018] [Revised: 08/10/2018] [Accepted: 08/22/2018] [Indexed: 11/19/2022] Open
Abstract
Social comparison has been found to affect humans in many aspects including outcome evaluation, emotional reaction, and decision-making. Here, two experiments were conducted using a gambling task involving monetary gains and losses (absolute outcome: win/loss), whereby participants' outcome was either better or worse than the outcome of a paired player (relative outcome: better/worse). The results of Experiment 1 showed that participants switched more frequently after absolute losses compared with absolute gains, consistent with previous studies showing a win-stay lose-shift heuristic in repeated decision-making. Participants also adopted a better-stay worse-switch strategy whereby they switched more often after worse outcomes than better outcomes when compared with others, demonstrating that the win-stay lose-shift rule extends to social comparison situations. In Experiment 2, through manipulating visual saliency, we replicated these findings and further demonstrated that decision making is influenced by emphasizing either the absolute (gain/loss) or relative (better/worse) aspect of the outcomes. Our research indicates that attentional modulation of information orchestrates social comparison, possibly by changing how each aspect of the information is weighted. These findings reinforce the idea that attention influences higher-level decision making by changing the weighting of each decisional dimension.
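The win-stay lose-shift and better-stay worse-switch patterns reported here reduce to conditional switch rates. A minimal sketch of that analysis on made-up trial data (the sequences below are hypothetical, not the study's data):

```python
def switch_rate(prev_outcomes, choices, after):
    """Proportion of trials on which the choice changed, given that the
    preceding trial's outcome matched `after` (e.g. 'loss' or 'worse')."""
    n_cond = n_switch = 0
    for t in range(1, len(choices)):
        if prev_outcomes[t - 1] == after:
            n_cond += 1
            n_switch += choices[t] != choices[t - 1]
    return n_switch / n_cond

# Hypothetical absolute outcomes and the choice made on each trial.
outcomes = ["loss", "win", "loss", "loss", "win", "loss"]
choices = ["A", "B", "B", "A", "B", "B"]

print(switch_rate(outcomes, choices, "loss"))  # → 1.0 (shift after losses)
print(switch_rate(outcomes, choices, "win"))   # → 0.0 (stay after wins)
```

Applied to relative outcomes ("better"/"worse") instead, the same function would quantify the better-stay worse-switch strategy described for Experiment 1.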
Collapse
|
49
|
Semantic content outweighs low-level saliency in determining children's and adults' fixation of movies. J Exp Child Psychol 2018; 166:293-309. [PMID: 28972928 PMCID: PMC5710995 DOI: 10.1016/j.jecp.2017.09.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Revised: 08/21/2017] [Accepted: 09/05/2017] [Indexed: 12/01/2022]
Abstract
To make sense of the visual world, we need to move our eyes to focus regions of interest on the high-resolution fovea. Eye movements, therefore, give us a way to infer mechanisms of visual processing and attention allocation. Here, we examined age-related differences in visual processing by recording eye movements from 37 children (aged 6-14 years) and 10 adults while viewing three 5-min dynamic video clips taken from child-friendly movies. The data were analyzed in two complementary ways: (a) gaze based and (b) content based. First, similarity of scanpaths within and across age groups was examined using three different measures of variance (dispersion, clusters, and distance from center). Second, content-based models of fixation were compared to determine which of these provided the best account of our dynamic data. We found that the variance in eye movements decreased as a function of age, suggesting common attentional orienting. Comparison of the different models revealed that a model that relies on faces generally performed better than the other models tested, even for the youngest age group (<10 years). However, the best predictor of a given participant's eye movements was the average of all other participants' eye movements, both within the same age group and in different age groups. These findings have implications for understanding how children attend to visual information and highlight similarities in viewing strategies across development.
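Two of the scanpath-variance measures named above, dispersion and distance from center, are straightforward to compute from fixation coordinates. The paper's exact formulas are not reproduced here, so the RMS-from-centroid definition of dispersion below is an assumption, and the coordinates are toy data in normalized screen units.

```python
import math

def dispersion(points):
    """Root-mean-square distance of fixations from their centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / n)

def mean_distance_from_center(points, center=(0.5, 0.5)):
    """Mean fixation distance from the screen center (normalized coords)."""
    return sum(math.dist(p, center) for p in points) / len(points)

# Tightly clustered fixations (adult-like, per the abstract's age trend)
# versus widely spread ones (child-like).
tight = [(0.50, 0.50), (0.52, 0.48), (0.49, 0.51)]
spread = [(0.10, 0.10), (0.90, 0.90), (0.10, 0.90)]

print(dispersion(tight) < dispersion(spread))  # → True
```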
Collapse
|
50
|
Saliency modulates affective evaluations but not behavioral responses in the ultimatum game. Acta Psychol (Amst) 2018; 183:99-107. [PMID: 29331200 DOI: 10.1016/j.actpsy.2018.01.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Revised: 12/26/2017] [Accepted: 01/09/2018] [Indexed: 11/23/2022] Open
Abstract
Although numerous studies have demonstrated that the saliency of perceptual information guides attention, the effect of perceptual saliency in high-level social situations remains unclear. Here, in a modified ultimatum game that included both gain and loss sharing, we highlighted either the fairness (fair or unfair) or the valence (gain or loss) aspect of a proposed offer using salient background colors with social meanings. The results showed that emotional responses to proposed offers were influenced by visual saliency. Specifically, individuals felt more dissatisfied about unfair (as opposed to fair) offers when fairness was emphasized than when valence was emphasized or when there was no emphasis; similarly, individuals felt more dissatisfied about loss situations compared to gain situations when valence was emphasized than when fairness was emphasized or when there was no emphasis. However, this attentional modulation of social information led to changes only in affective responses but not in actual behavioral responses. Our findings indicate that attentional modulation of social information has a profound impact on affective evaluation by changing how information is weighed.
Collapse
|