1
Ancora LA, Blanco-Mora DA, Alves I, Bonifácio A, Morgado P, Miranda B. Cities and neuroscience research: A systematic literature review. Front Psychiatry 2022; 13:983352. PMID: 36440407; PMCID: PMC9684645; DOI: 10.3389/fpsyt.2022.983352.
Abstract
Background Cities are becoming the socio-economic hubs for most of the world's population. Understanding how our surroundings can mentally affect everyday life has become crucial to integrating environmental sustainability into urban development. The present review explores empirical studies investigating the neural mechanisms underlying cognitive and emotional processes elicited by exposure to different urban built and natural spaces. It also seeks to identify new research questions and to leverage neurourbanism as a framework for achieving healthier and more sustainable cities. Methods Following the PRISMA framework, we conducted a structured search of the PubMed, ProQuest, Web of Science, and Scopus databases. Only articles on how the urban environment (built or natural) affects brain activity, as objectively measured with imaging or electrophysiological techniques, were considered. Further inclusion criteria were studies on adult human populations, peer-reviewed, and in English. Results Sixty-two articles met the inclusion criteria. They were qualitatively assessed and analyzed to determine the main findings and emerging concepts. Overall, the results suggest that exposure to urban built environments (compared with natural spaces) elicits activations in brain regions or networks strongly related to perceptual, attentional, and (spatial) cognitive demands. The city's built environment also triggers neural circuits linked to stress and negative affect. These findings converged across neuroscience techniques, in both laboratory and real-life settings. Additionally, the evidence showed associations between neural social-stress processing and urban upbringing or current city living, suggesting a mechanistic link to certain mood and anxiety disorders. Finally, environmental diversity was found to be critical for positive affect and individual well-being.
Conclusion Contemporary human-environment interactions and planetary challenges demand a greater understanding of the neural underpinnings of how urban space affects cognition and emotion. This review provides scientific evidence that could inform policy making for improved urban mental health. Several studies showed that high-quality green or blue spaces, and biodiverse urban areas, are important allies for positive neural, cognitive, and emotional processes. Nonetheless, spatial perception in social contexts (e.g., city overcrowding) deserves further attention from urban planners and scientists. The implications of these observations for several theories in environmental psychology and research are discussed. Future work should take advantage of technological advances to better characterize behavior, brain physiology, and environmental factors, and apply them to the remaining complexity of contemporary cities.
Affiliation(s)
- Leonardo A. Ancora
- Institute of Physiology, Lisbon School of Medicine, University of Lisbon, Lisbon, Portugal
- Inês Alves
- Institute of Molecular Medicine, Lisbon School of Medicine, University of Lisbon, Lisbon, Portugal
- Ana Bonifácio
- Centre of Geographical Studies, Institute of Geography and Spatial Planning, University of Lisbon, Lisbon, Portugal
- Paulo Morgado
- Centre of Geographical Studies, Institute of Geography and Spatial Planning, University of Lisbon, Lisbon, Portugal
- Bruno Miranda
- Institute of Physiology, Lisbon School of Medicine, University of Lisbon, Lisbon, Portugal
- Institute of Molecular Medicine, Lisbon School of Medicine, University of Lisbon, Lisbon, Portugal
2
Fujimoto K, Hayashi K, Katayama R, Lee S, Liang Z, Yoshida W, Ishii S. Deep learning-based image deconstruction method with maintained saliency. Neural Netw 2022; 155:224-241. PMID: 36081196; DOI: 10.1016/j.neunet.2022.08.015.
Abstract
Visual properties that primarily attract bottom-up attention are collectively referred to as saliency. In this study, to understand the neural activity involved in top-down and bottom-up visual attention, we aim to prepare pairs of natural and unnatural images with common saliency. For this purpose, we propose an image transformation method based on deep neural networks that can generate new images while maintaining a consistent feature map, in particular the saliency map. This is an ill-posed problem because the transformation from an image to its corresponding feature map can be many-to-one; in our particular case, various images would share the same saliency map. Although stochastic image generation has the potential to solve such ill-posed problems, most existing methods focus on adding diversity to the overall style/touch while maintaining the naturalness of the generated images. We therefore developed a new image transformation method that incorporates higher-dimensional latent variables, so that the generated images appear unnatural, with less context information, but retain a high diversity of local image structures. Because such high-dimensional latent spaces are prone to collapse, we propose a new regularization based on Kullback-Leibler divergence to prevent the latent distribution from collapsing. We also conducted human experiments using our newly prepared natural and corresponding unnatural images, recording overt eye movements and functional magnetic resonance imaging responses, and found that those images induced distinctive neural activities related to top-down and bottom-up attentional processing.
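The Kullback-Leibler regularizer mentioned in this abstract is not spelled out here; below is a minimal sketch of the standard closed-form KL penalty between a diagonal Gaussian latent posterior and a standard normal prior, a common choice in stochastic image generation. The function name, and the assumption that the paper's regularizer resembles this form, are ours.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL divergence between a diagonal Gaussian
    N(mu, diag(exp(log_var))) and the standard normal prior N(0, I).
    Penalizing this term discourages the latent distribution from
    drifting away from the prior, one way to counteract collapse in
    high-dimensional latent spaces."""
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A latent code that already matches the prior incurs zero penalty:
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # -> 0.0
```

In practice a weighted version of this term would be added to the generator's loss; the weight trades off latent diversity against fidelity to the prior.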
Affiliation(s)
- Keisuke Fujimoto
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Kojiro Hayashi
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Risa Katayama
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Sehyung Lee
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Zhen Liang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China; Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
- Wako Yoshida
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Shin Ishii
- Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan; ATR Neural Information Analysis Laboratories, Kyoto 619-0288, Japan
3
Nuthmann A, Clayden AC, Fisher RB. The effect of target salience and size in visual search within naturalistic scenes under degraded vision. J Vis 2021; 21:2. PMID: 33792616; PMCID: PMC8024777; DOI: 10.1167/jov.21.4.2.
Abstract
We address two questions concerning eye guidance during visual search in naturalistic scenes. First, search has been described as a task in which visual salience is unimportant. Here, we revisit this question by using a letter-in-scene search task that minimizes any confounding effects that may arise from scene guidance. Second, we investigate how important the different regions of the visual field are for different subprocesses of search (target localization, verification). In Experiment 1, we manipulated both the salience (low vs. high) and the size (small vs. large) of the target letter (a "T"), and we implemented a foveal scotoma (radius: 1°) in half of the trials. In Experiment 2, observers searched for high- and low-salience targets either with full vision or with a central or peripheral scotoma (radius: 2.5°). In both experiments, we found main effects of salience, with better performance for high-salience targets. In Experiment 1, search was faster for large than for small targets, and high salience helped more for small targets. When searching with a foveal scotoma, performance was relatively unimpaired regardless of the target's salience and size. In Experiment 2, both visual-field manipulations led to search-time costs, but the peripheral scotoma was much more detrimental than the central one. Peripheral vision proved to be important for target localization, and central vision for target verification. Salience affected eye-movement guidance to the target in both central and peripheral vision. Collectively, the results lend support to search models that incorporate salience for predicting eye-movement behavior.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Germany; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK. ORCID: 0000-0003-3338-3434
- Adam C Clayden
- School of Engineering, Arts, Science and Technology, University of Suffolk, UK; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, UK
4
Schultz BG, Brown RM, Kotz SA. Dynamic acoustic salience evokes motor responses. Cortex 2020; 134:320-332. PMID: 33340879; DOI: 10.1016/j.cortex.2020.10.019.
Abstract
Audio-motor integration is currently viewed as a predictive process in which the brain simulates upcoming sounds based on voluntary actions. This perspective does not consider how our auditory environment may trigger involuntary action in the absence of prediction. We address this issue by examining the relationship between acoustic salience and involuntary motor responses. We investigate how acoustic features in music contribute to the perception of salience, and whether those features trigger involuntary peripheral motor responses. Participants with little-to-no musical training listened to musical excerpts once while remaining still during the recording of their muscle activity with surface electromyography (sEMG), and again while they continuously rated perceived salience within the music using a slider. We show cross-correlations between 1) salience ratings and acoustic features, 2) acoustic features and spontaneous muscle activity, and 3) salience ratings and spontaneous muscle activity. Amplitude, intensity, and spectral centroid were perceived as the most salient features in music, and fluctuations in these features evoked involuntary peripheral muscle responses. Our results suggest an involuntary mechanism for audio-motor integration, which may rely on brainstem-spinal or brainstem-cerebellar-spinal pathways. Based on these results, we argue that a new framework is needed to explain the full range of human sensorimotor capabilities. This goal can be achieved by considering how predictive and reactive audio-motor integration mechanisms could operate independently or interactively to optimize human behavior.
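The cross-correlation analyses this abstract describes (salience ratings vs. acoustic features vs. sEMG activity) can be illustrated with a toy lagged-correlation routine. This is our own sketch: the study's actual preprocessing, windowing, and significance testing are not specified in the abstract.

```python
import numpy as np

def lagged_cross_correlation(x, y, max_lag):
    """Pearson correlation between x and a lagged copy of y for each
    lag in [-max_lag, +max_lag]. A peak at a nonzero lag indicates
    that one signal leads the other by that many samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

# A slow oscillation correlates best with itself shifted by 2 samples:
t = np.arange(200)
sig = np.sin(t / 10.0)
cc = lagged_cross_correlation(sig, np.roll(sig, -2), 2)
```

Applied to, say, a spectral-centroid time course and a rectified sEMG envelope, the lag of the correlation peak would indicate how quickly muscle activity follows salient acoustic fluctuations.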
Affiliation(s)
- Benjamin G Schultz
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands
- Rachel M Brown
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands
- Sonja A Kotz
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands
5
Guillermo S, Correll J. Beyond stereotypes: The complexity of attention to racial out-group faces. Journal of Theoretical Social Psychology 2020. DOI: 10.1002/jts5.58.
Affiliation(s)
- Joshua Correll
- Psychology and Neuroscience, University of Colorado Boulder, CO, USA
6
Van de Weijgert M, Van der Burg E, Donk M. Attentional guidance varies with display density. Vision Res 2019; 164:1-11. PMID: 31401217; DOI: 10.1016/j.visres.2019.08.001.
Abstract
The aim of the present study was to investigate how display density affects attentional guidance in heterogeneous search displays. In Experiment 1, we presented observers with heterogeneous sparse and dense search displays that were adaptively changed over the course of the experiment using genetic algorithms. We generated random displays and, based on the fastest search times, selected the displays that allowed the most efficient search to generate the next generations, thus revealing which properties facilitated or inhibited target search across display densities. The results showed that the prevalence of distractors sharing the target color was substantially reduced over generations in sparse displays. Dense displays also evolved to contain fewer distractors sharing the target color, but only when the orientation of the distractors resembled the target orientation. More importantly, spatial analyses revealed that changes across generations occurred across all areas in sparse displays but were confined to the area around the target location in dense displays. In Experiment 2, which used a factorial design, we showed that the presence of potentially interfering distractors in the target area affected search in dense displays but not in sparse displays. Together, the results suggest that the role of salience-driven attentional guidance is larger in dense than in sparse displays, even in the absence of display homogeneity.
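The adaptive-display procedure can be caricatured with a toy genetic algorithm. Everything here is our invention for illustration: the display encoding (1 = a distractor sharing the target color), the simulated search-time model, and all parameter values. In the actual experiment, fitness came from real observers' search times.

```python
import random

def evolve_displays(n_displays=20, n_items=30, generations=40, seed=1):
    """Toy GA in the spirit of the adaptive-display method: each display
    is a list of distractors (1 = shares the target colour), simulated
    search time grows with the number of target-coloured distractors,
    and the fastest displays seed the next generation.
    Returns the final prevalence of target-coloured distractors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_items)]
           for _ in range(n_displays)]
    for _ in range(generations):
        # Simulated search time: noisy, slower with more shared-colour items.
        timed = sorted(pop, key=lambda d: sum(d) + rng.gauss(0, 1))
        parents = timed[:n_displays // 2]
        pop = []
        for _ in range(n_displays):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(n_items)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # occasional mutation
                i = rng.randrange(n_items)
                child[i] = 1 - child[i]
            pop.append(child)
    return sum(sum(d) for d in pop) / (n_displays * n_items)
```

Starting from roughly 50% target-colored distractors, selection on (simulated) search time drives their prevalence down over generations, mirroring the evolution the study reports for sparse displays.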
Affiliation(s)
- Marlies Van de Weijgert
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Faculty of Engineering, Design and Computing, Inholland University of Applied Sciences, Delft, the Netherlands
- Erik Van der Burg
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; School of Psychology, University of Sydney, Sydney, Australia
- Mieke Donk
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
7
Neale C, Aspinall P, Roe J, Tilley S, Mavros P, Cinderby S, Coyne R, Thin N, Ward Thompson C. The impact of walking in different urban environments on brain activity in older people. 2019. DOI: 10.1080/23748834.2019.1619893.
Affiliation(s)
- Chris Neale
- Stockholm Environment Institute, Environment Department, University of York, York, England
- Frank Batten School of Leadership and Public Policy, University of Virginia, Charlottesville, VA, USA
- Peter Aspinall
- School of Built Environment, Heriot-Watt University, Edinburgh, Scotland
- Jenny Roe
- Center for Design and Health, School of Architecture, University of Virginia, Charlottesville, VA, USA
- Sara Tilley
- OPENspace Research Centre, Edinburgh College of Art, University of Edinburgh, Edinburgh, Scotland
- Panagiotis Mavros
- Future Cities Laboratory, Singapore-ETH Centre, ETH Zürich, Singapore
- Steve Cinderby
- Stockholm Environment Institute, Environment Department, University of York, York, England
- Richard Coyne
- Edinburgh School of Architecture and Landscape Architecture, University of Edinburgh, Edinburgh, Scotland
- Neil Thin
- School of Social and Political Science, University of Edinburgh, Edinburgh, Scotland
- Catharine Ward Thompson
- OPENspace Research Centre, Edinburgh College of Art, University of Edinburgh, Edinburgh, Scotland
8
Neale C, Aspinall P, Roe J, Tilley S, Mavros P, Cinderby S, Coyne R, Thin N, Bennett G, Thompson CW. The Aging Urban Brain: Analyzing Outdoor Physical Activity Using the Emotiv Affectiv Suite in Older People. J Urban Health 2017; 94:869-880. PMID: 28895027; PMCID: PMC5722728; DOI: 10.1007/s11524-017-0191-9.
Abstract
This research directly assesses older people's neural activation in response to a changing urban environment while walking, as measured by electroencephalography (EEG). The study builds on previous research showing changes in cortical activity while moving through different urban settings, and extends this methodology to explore previously unstudied outcomes in older people aged 65 years or more (n = 95). Participants were recruited to walk one of six scenarios pairing urban busy (a commercial street with traffic), urban quiet (a residential street), and urban green (a public park) spaces in a counterbalanced design, wearing a mobile Emotiv EEG headset to record real-time neural responses to place. Each walk lasted around 15 min and was undertaken at the participant's own pace. We report the responses derived from the Emotiv Affectiv Suite software, which assigns real-time values to four emotional parameters ('excitement', 'frustration', 'engagement' and 'meditation'). The six walking scenarios were compared using a form of high-dimensional correlated component regression (CCR) on difference data, capturing the change between one setting and another. The results showed that levels of 'engagement' were higher in the urban green space than in the urban busy and urban quiet spaces, whereas levels of 'excitement' were higher in the urban busy environment than in the urban green and urban quiet spaces. In both cases, the effect held regardless of the order of exposure to the different environments. These results suggest that there are neural signatures associated with the experience of different urban spaces, which may reflect the older age of the sample as well as the condition of the spaces themselves. The urban green space appears to have a restorative effect on this group of older adults.
Affiliation(s)
- Chris Neale
- Stockholm Environment Institute, Environment Department, University of York, York, England
- Peter Aspinall
- School of Built Environment, Heriot-Watt University, Edinburgh, Scotland
- Jenny Roe
- Center for Design and Health, School of Architecture, University of Virginia, Charlottesville, VA, USA
- Sara Tilley
- OPENspace Research Centre, Edinburgh College of Art, University of Edinburgh, Edinburgh, Scotland
- Steve Cinderby
- Stockholm Environment Institute, Environment Department, University of York, York, England
- Richard Coyne
- Edinburgh School of Architecture and Landscape Architecture, University of Edinburgh, Edinburgh, Scotland
- Neil Thin
- School of Social and Political Science, University of Edinburgh, Edinburgh, Scotland
- Catharine Ward Thompson
- OPENspace Research Centre, Edinburgh College of Art, University of Edinburgh, Edinburgh, Scotland
9
Li N, Ye J, Ji Y, Ling H, Yu J. Saliency Detection on Light Field. IEEE Trans Pattern Anal Mach Intell 2017; 39:1605-1616. PMID: 27654139; DOI: 10.1109/tpami.2016.2610425.
Abstract
Existing saliency detection approaches use images as inputs and are sensitive to foreground/background similarities, complex background textures, and occlusions. We explore the problem of using light fields as input for saliency detection. Our technique is enabled by the availability of commercial plenoptic cameras that capture the light field of a scene in a single shot. We show that the unique refocusing capability of light fields provides useful focusness, depth, and objectness cues. We further develop a new saliency detection algorithm tailored for light fields. To validate our approach, we acquire a light field database of a range of indoor and outdoor scenes and generate the ground-truth saliency maps. Experiments show that our saliency detection scheme can robustly handle challenging scenarios such as similar foreground and background, cluttered backgrounds, and complex occlusions, and achieves high accuracy and robustness.
10
Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes. Conscious Cogn 2017; 51:223-235. DOI: 10.1016/j.concog.2017.03.011.
11
Toscano-Zapién AL, Velázquez-López D, Velázquez-Martínez DN. Attentional Mechanisms during the Performance of a Subsecond Timing Task. PLoS One 2016; 11:e0158508. PMID: 27467762; PMCID: PMC4965134; DOI: 10.1371/journal.pone.0158508.
Abstract
There is evidence that timing processes on the suprasecond scale are modulated by attentional mechanisms; in addition, some studies have shown that attentional mechanisms also affect timing on the subsecond scale. Our aim was to study eye movements and pupil diameter during a temporal bisection task in the subsecond range. Subjects were trained to discriminate anchor intervals of 200 or 800 msec and were then confronted with intermediate durations. Eye movements revealed that subjects used different cognitive strategies during the bisection timing task. When the stimulus to be timed appeared randomly at a central or one of four peripheral positions on a screen, some subjects chose to maintain their gaze toward the central area while others followed the peripheral placement of the stimulus; some subjects used both strategies. The time of subjective equality did not differ between subjects who employed different attentional strategies. However, differences emerged in timing variance and in attentional indexes (time to initial fixation, response latency, pupil dilation, and the duration and number of fixations to stimulus areas). Timing in the subsecond range thus seems invariant despite the use of different attentional strategies. Future research should determine whether the selection of attentional mechanisms is related to particular timing tasks or instructions, or whether it represents idiosyncratic cognitive "styles".
Affiliation(s)
- Anna L. Toscano-Zapién
- Departamento de Psicofisiologia, Facultad de Psicología, Universidad Nacional Autónoma de México, D.F. México, 04510, México
- Daniel Velázquez-López
- Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, D.F. México, 04510, México
- David N. Velázquez-Martínez
- Departamento de Psicofisiologia, Facultad de Psicología, Universidad Nacional Autónoma de México, D.F. México, 04510, México
12
Fernández-Martín A, Calvo MG. Extrafoveal capture of attention by emotional scenes: affective valence versus visual saliency. Visual Cognition 2016. DOI: 10.1080/13506285.2016.1139026.
13
Lateralized discrimination of emotional scenes in peripheral vision. Exp Brain Res 2014; 233:997-1006. DOI: 10.1007/s00221-014-4174-8.
14
Crabb DP, Smith ND, Zhu H. What's on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths. Front Aging Neurosci 2014; 6:312. PMID: 25429267; PMCID: PMC4228197; DOI: 10.3389/fnagi.2014.00312.
Abstract
PURPOSE We test the hypothesis that age-related neurodegenerative eye disease can be detected by examining patterns of eye movement recorded while a person naturally watches a movie. METHODS Thirty-two elderly people with healthy vision (median age: 70, interquartile range [IQR] 64-75 years) and 44 patients with a clinical diagnosis of glaucoma (median age: 69, IQR 63-77 years) had standard vision examinations including automated perimetry. Disease severity was measured using a standard clinical measure (visual field mean deviation; MD). All study participants viewed three unmodified TV and film clips on a computer setup incorporating the EyeLink 1000 eye tracker (SR Research, Ontario, Canada). Eye movement scanpaths were plotted using novel methods that first filtered the data and then generated saccade density maps. The maps were then subjected to feature extraction using kernel principal component analysis (KPCA). Features from the KPCA were then classified using a standard machine-based classifier, trained and tested by 10-fold cross-validation repeated 100 times to estimate confidence intervals (CIs) for classification sensitivity and specificity. RESULTS Patients had a range of disease severity from early to advanced (median [IQR] right-eye and left-eye MD was -7 [-13 to -5] dB and -9 [-15 to -4] dB, respectively). Average sensitivity for correctly identifying a glaucoma patient at a fixed specificity of 90% was 79% (95% CI: 58-86%). The area under the receiver operating characteristic curve was 0.84 (95% CI: 0.82-0.87). CONCLUSIONS Large volumes of data from scanpaths of eye movements, recorded while people freely watch TV-type films, can be processed into maps that contain a signature of vision loss. In this proof-of-principle study we have demonstrated that a group of patients with age-related neurodegenerative eye disease can be reasonably well separated from a group of healthy peers by considering these eye movement signatures alone.
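The KPCA feature-extraction step can be sketched with a small NumPy implementation. The RBF kernel and all parameter choices below are our assumptions; the abstract does not specify the kernel, the classifier, or the map preprocessing.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel (an assumed choice), as might be
    used to extract features from flattened saccade density maps.
    Returns the projection of each sample onto the top components."""
    X = np.asarray(X, dtype=float)
    sq = np.sum(X**2, axis=1)
    # Pairwise RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    # Centre the kernel matrix in feature space.
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Project onto unit-norm components alpha_k = v_k / sqrt(lambda_k).
    return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))
```

The leading kernel principal components of each participant's map would then be fed to a classifier inside the cross-validation loop to estimate sensitivity and specificity.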
Affiliation(s)
- David P Crabb
- Department of Optometry and Visual Science, School of Health Sciences, City University London, London, UK
- Nicholas D Smith
- Department of Optometry and Visual Science, School of Health Sciences, City University London, London, UK
- Haogang Zhu
- Department of Optometry and Visual Science, School of Health Sciences, City University London, London, UK
15
Calvo MG, Beltrán D, Fernández-Martín A. Processing of facial expressions in peripheral vision: Neurophysiological evidence. Biol Psychol 2014; 100:60-70. DOI: 10.1016/j.biopsycho.2014.05.007.
16
Facial expression recognition in peripheral versus central vision: role of the eyes and the mouth. Psychological Research 2013; 78:180-195. DOI: 10.1007/s00426-013-0492-x.
17
Enhanced Processing of Emotional Gist in Peripheral Vision. Spanish Journal of Psychology 2013; 12:414-423. DOI: 10.1017/s1138741600001803.
Abstract
Emotional (pleasant or unpleasant) and neutral scenes were presented foveally (at fixation) or peripherally (5.2° away from fixation) as primes for 150 ms. The prime was followed by a mask and a centrally presented probe scene for recognition. The probe was either identical in specific content (i.e., same people and objects) to the prime, or related to the prime in general content and affective valence. The probe always differed from the prime in color, size, and spatial orientation. Results showed an interaction between prime location and emotional valence for the recognition hit rate, but also for the false alarm rate and correct-rejection times. There were no differences as a function of emotional valence in the foveal display condition. In contrast, in the peripheral display condition, both hit and false alarm rates were higher, and correct-rejection times were longer, for emotional than for neutral scenes. It is concluded that emotional gist, or a coarse affective impression, is extracted from emotional scenes in peripheral vision, which then leads them to be confused with other scenes of related affective valence. The underlying neurophysiological mechanisms are discussed. An alternative explanation based on the physical characteristics of the scene images was ruled out.
18
Doi H, Shinohara K. Task-irrelevant direct gaze facilitates visual search for deviant facial expression. Visual Cognition 2013. DOI: 10.1080/13506285.2013.779350.
19
Shen J, Itti L. Top-down influences on visual attention during listening are modulated by observer sex. Vision Res 2012; 65:62-76. DOI: 10.1016/j.visres.2012.06.001.
20
Toet A. Computational versus psychophysical bottom-up image saliency: a comparative evaluation study. IEEE Trans Pattern Anal Mach Intell 2011; 33:2131-2146. PMID: 21422490; DOI: 10.1109/tpami.2011.53.
Abstract
The predictions of 13 computational bottom-up saliency models and a newly introduced Multiscale Contrast Conspicuity (MCC) metric are compared with human visual conspicuity measurements. The agreement between human visual conspicuity estimates and model saliency predictions is quantified through their rank order correlation. The maximum of the computational saliency value over the target support area correlates most strongly with visual conspicuity for 12 of the 13 models. A simple multiscale contrast model and the MCC metric both yield the largest correlation with human visual target conspicuity (> 0.84). Local image saliency largely determines human visual inspection and interpretation of static and dynamic scenes. Computational saliency models therefore have a wide range of important applications, like adaptive content delivery, region-of-interest-based image compression, video summarization, progressive image transmission, image segmentation, image quality assessment, object recognition, and content-aware image scaling. However, current bottom-up saliency models do not incorporate important visual effects like crowding and lateral interaction. Additional knowledge about the exact nature of the interactions between the mechanisms mediating human visual saliency is required to develop these models further. The MCC metric and its associated psychophysical saliency measurement procedure are useful tools to systematically investigate the relative contribution of different feature dimensions to overall visual target saliency.
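The evaluation described above can be sketched in a few lines: take the maximum model saliency over each target's support area and rank-correlate it with human conspicuity estimates. Everything below (the data values, the 8x8 maps, and the toy "model") is invented for illustration; only the max-over-target statistic and the rank-order correlation come from the abstract.

```python
import numpy as np

def rankdata(x):
    """Ranks without tie handling (the toy values here are distinct)."""
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(len(x))
    return ranks

def spearman(a, b):
    """Rank-order (Spearman) correlation between two samples."""
    return np.corrcoef(rankdata(a), rankdata(b))[0, 1]

def max_saliency_over_target(saliency_map, target_mask):
    """Maximum model saliency within the target support area."""
    return float(saliency_map[target_mask].max())

# Invented data: six targets with human conspicuity estimates, and a toy
# "model" whose saliency maps track conspicuity plus a little noise.
rng = np.random.default_rng(0)
human_conspicuity = np.array([0.2, 0.5, 0.9, 0.4, 0.7, 0.1])
target_mask = np.zeros((8, 8), bool)
target_mask[2:5, 2:5] = True
model_scores = [max_saliency_over_target(rng.random((8, 8)) * 0.05 + c,
                                         target_mask)
                for c in human_conspicuity]
rho = spearman(human_conspicuity, model_scores)
print(f"rank-order correlation: {rho:.2f}")
```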
Affiliation(s)
- Alexander Toet
- Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands.
21
Myers CW, Gray WD, Sims CR. The insistence of vision: Why do people look at a salient stimulus when it signals target absence? Vis Cogn 2011. [DOI: 10.1080/13506285.2011.614379]
22
Calvo MG, Nummenmaa L. Time course of discrimination between emotional facial expressions: the role of visual saliency. Vision Res 2011; 51:1751-9. [PMID: 21683730] [DOI: 10.1016/j.visres.2011.06.001]
Abstract
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280ms, depending on type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions.
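One common way to estimate such a minimum latency (a sketch, not the authors' procedure) is to bin correct and incorrect responses by latency and take the earliest bin whose accuracy reliably exceeds chance in a two-alternative task. The bin width, the binomial criterion, and the simulated observer below are all assumptions made for illustration.

```python
import numpy as np
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def minimum_latency(latencies_ms, correct, bin_width=20, alpha=0.05):
    """Earliest latency bin whose hit count exceeds chance at the alpha level."""
    lo, hi = min(latencies_ms), max(latencies_ms)
    for start in range(int(lo), int(hi), bin_width):
        in_bin = [c for l, c in zip(latencies_ms, correct)
                  if start <= l < start + bin_width]
        n, k = len(in_bin), sum(in_bin)
        if n >= 10 and binom_sf(k, n) < alpha:
            return start
    return None

# Simulated observer: saccades before 220 ms are guesses, later ones ~90% correct.
rng = np.random.default_rng(1)
lat = rng.uniform(150, 400, 500)
acc = np.where(lat < 220, rng.random(500) < 0.5, rng.random(500) < 0.9)
min_lat = minimum_latency(lat.tolist(), acc.tolist())
print(min_lat)
```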
Affiliation(s)
- Manuel G Calvo
- Department of Cognitive Psychology, University of La Laguna, Tenerife, Spain.
23
Madhavan P, Lacson FC, Gonzalez C, Brennan PC. The role of incentive framing on training and transfer of learning in a visual threat detection task. Appl Cogn Psychol 2011. [DOI: 10.1002/acp.1807]
24
Sanders-Jackson AN, Cappella JN, Linebarger DL, Piotrowski JT, O'Keeffe M, Strasser AA. Visual attention to antismoking PSAs: smoking cues versus other attention-grabbing features. Hum Commun Res 2011; 37:275-292. [PMID: 23136462] [PMCID: PMC3489183] [DOI: 10.1111/j.1468-2958.2010.01402.x]
Abstract
This study examines how addicted adult smokers attend visually to smoking-related public service announcements (PSAs). Smokers' onscreen visual fixation is an indicator of cognitive resources allocated to visual attention. Characteristic of individuals with addictive tendencies, smokers are expected to be appetitively activated by images of their addiction, specifically smoking cues. At the same time, these cues are embedded in messages that associate avoidance responses with these appetitive cues, potentially inducing avoidance of PSA processing. Findings suggest that segments of PSAs that contain smoking cues are processed similarly to segments that contain complex stimuli (operationalized in this case as high in information introduced) and that visual attention is aligned with smoking cues on the screen.
Affiliation(s)
- Ashley N. Sanders-Jackson
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104, USA
- Center of Excellence in Cancer Communication Research, University of Pennsylvania, Philadelphia, PA, USA
- Joseph N. Cappella
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104, USA
- Deborah L. Linebarger
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104, USA
- Moira O'Keeffe
- Department of Communication, Bellarmine University, Louisville, KY, USA
- Andrew A. Strasser
- Center for Interdisciplinary Research on Nicotine Addiction, University of Pennsylvania, Philadelphia, PA, USA
- Center of Excellence in Cancer Communication Research, University of Pennsylvania, Philadelphia, PA, USA
25
Hoyer WJ, Cerella J, Buchler NG. A search-by-clusters model of visual search: fits to data from younger and older adults. J Gerontol B Psychol Sci Soc Sci 2011; 66:402-10. [PMID: 21459772] [DOI: 10.1093/geronb/gbr022]
Abstract
OBJECTIVES This study aims to specify the processing operations underlying age-related differences in the speed and accuracy of visual search in a mathematical model. METHOD Eighteen older and eighteen young adults searched for a predesignated target within 24-degree visual arrays containing distractors. Targets were systematically placed in regions that extended 2.5, 5.0, 7.5, and 10 degrees from center. Data were fitted to several versions of a mathematical model in which it was assumed that target search proceeds from the center fixation to peripheral areas in a succession of visual inspections of clusters until the target is located and that clusters can vary in size in response to search difficulty. RESULTS Eccentricity effects on latencies and errors were larger for older adults than for younger adults, especially in the hardest search condition. The best-fitting version of the "search-by-clusters" model accounted for an average of 98.4% and 95.4% of the variance in the young and older adults, respectively. The resulting time, accuracy, and cluster parameters behaved plausibly in each of the 36 data sets. CONCLUSIONS A quantitative model that specified how individuals searched for targets in large arrays accurately predicted the search times and accuracies of younger and older adults.
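A minimal simulation of the idea, with invented parameters (cluster size, per-inspection time, base time), assuming only what the abstract states: search proceeds from central fixation outward, inspecting one cluster of items per step until the target is located.

```python
import random

def search_by_clusters(items, target, cluster_size=4, t_inspect=150, t_base=300):
    """Return (latency_ms, found). Items are (eccentricity, identity) pairs."""
    ordered = sorted(items, key=lambda it: it[0])       # center outward
    clusters = [ordered[i:i + cluster_size]
                for i in range(0, len(ordered), cluster_size)]
    t = t_base
    for cluster in clusters:
        t += t_inspect                                  # one visual inspection
        if any(identity == target for _, identity in cluster):
            return t, True
    return t, False

random.seed(2)
# 24 distractors at random eccentricities plus a target at 7.5 degrees.
display = [(random.uniform(0, 10), "distractor") for _ in range(24)]
display.append((7.5, "target"))
latency, found = search_by_clusters(display, "target")
print(latency, found)
```

Note how eccentricity effects fall out of the structure: the farther out the target, the more clusters must be inspected before it is reached, so latency grows with eccentricity.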
Affiliation(s)
- William J Hoyer
- Department of Psychology, Syracuse University, Syracuse, New York 13244-2340, USA.
26
27
28
Van der Stigchel S, Belopolsky AV, Peters JC, Wijnen JG, Meeter M, Theeuwes J. The limits of top-down control of visual attention. Acta Psychol (Amst) 2009; 132:201-12. [PMID: 19635610] [DOI: 10.1016/j.actpsy.2009.07.001]
Abstract
The extent to which spatial selection is driven by the goals of the observer and by the properties of the environment is one of the major issues in the field of visual attention. Here we review recent experimental evidence from behavioral and eye movement studies suggesting that top-down control has temporal and spatial limits. More specifically, we argue that the first feedforward sweep of information is bottom-up, and that top-down control can influence selection only after the sweep is completed. In addition, top-down control can limit spatial selection through adjusting the size of the attentional window, an area of visual space which receives priority in information sampling. Finally, we discuss the evidence found using brain imaging techniques for top-down control in an attempt to reconcile it with behavioral findings. We conclude by discussing theoretical implications of these results for the current models of visual selection.
Affiliation(s)
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 2, Utrecht, The Netherlands.
29
Serrano-Gotarredona R, Oster M, Lichtsteiner P, Linares-Barranco A, Paz-Vicente R, Gomez-Rodriguez F, Camunas-Mesa L, Berner R, Rivas-Perez M, Delbruck T, Liu SC, Douglas R, Hafliger P, Jimenez-Moreno G, Civit Ballcels A, Serrano-Gotarredona T, Acosta-Jimenez AJ, Linares-Barranco B. CAVIAR: a 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking. IEEE Trans Neural Netw 2009; 20:1417-38. [PMID: 19635693] [DOI: 10.1109/tnn.2009.2023653]
30
Chen D, Zhang L, Weng J. Spatio-temporal adaptation in the unsupervised development of networked visual neurons. IEEE Trans Neural Netw 2009; 20:992-1008. [PMID: 19457750] [DOI: 10.1109/tnn.2009.2015082]
Abstract
There have been many computational models mimicking the visual cortex that are based on spatial adaptations of unsupervised neural networks. In this paper, we present a new model called neuronal cluster which includes spatial as well as temporal weights in its unified adaptation scheme. The "in-place" nature of the model is based on two biologically plausible learning rules, Hebbian rule and lateral inhibition. We present the mathematical demonstration that the temporal weights are derived from the delay in lateral inhibition. By training with the natural videos, this model can develop spatio-temporal features such as orientation selective cells, motion sensitive cells, and spatio-temporal complex cells. The unified nature of the adaptation scheme allows us to construct a multilayered and task-independent attention selection network which uses the same learning rule for edge, motion, and color detection, and we can use this network to engage in attention selection in both static and dynamic scenes.
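The two learning rules named in the abstract can be sketched as follows; this is an illustrative toy, not the paper's "in-place" model, and all constants are invented. Lateral inhibition is approximated here as winner-take-most competition, and only the winning neurons apply the Hebbian update.

```python
import numpy as np

def hebbian_step(weights, x, lr=0.05, k_winners=2):
    """One update: neurons compete laterally; only winners learn (Hebb)."""
    responses = weights @ x
    winners = np.argsort(responses)[-k_winners:]        # lateral inhibition:
    suppressed = np.zeros_like(responses)               # losing neurons are
    suppressed[winners] = responses[winners]            # silenced
    for j in winners:
        weights[j] += lr * suppressed[j] * x            # Hebbian: dw = lr*y*x
        weights[j] /= np.linalg.norm(weights[j])        # keep weights bounded
    return weights, suppressed

rng = np.random.default_rng(3)
weights = rng.standard_normal((8, 16))                  # 8 neurons, 16 inputs
weights /= np.linalg.norm(weights, axis=1, keepdims=True)
for _ in range(200):
    x = rng.standard_normal(16)
    weights, _ = hebbian_step(weights, x)
print(weights.shape)
```

The competition step is what makes different neurons specialize for different input patterns; without it, Hebbian learning alone would drive every neuron toward the same dominant direction of the input.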
Affiliation(s)
- Dongyue Chen
- Department of Electronic Engineering, Fudan University, Shanghai 200433, China.
31
Coen-Cagli R, Coraggio P, Napoletano P, Schwartz O, Ferraro M, Boccignone G. Visuomotor characterization of eye movements in a drawing task. Vision Res 2009; 49:810-8. [PMID: 19268685] [DOI: 10.1016/j.visres.2009.02.016]
Abstract
Understanding visuomotor coordination requires the study of tasks that engage mechanisms for the integration of visual and motor information; in this paper we choose a paradigmatic yet little studied example of such a task, namely realistic drawing. On the one hand, our data indicate that the motor task has little influence on which regions of the image are overall most likely to be fixated: salient features are fixated most often. Conversely, the effect of motor constraints is revealed in the temporal aspect of the scanpaths: (1) subjects direct their gaze to an object mostly when they are acting upon (drawing) it; and (2) in support of graphically continuous hand movements, scanpaths resemble edge-following patterns along image contours. For a better understanding of such properties, a computational model is proposed in the form of a novel kind of Dynamic Bayesian Network, and simulation results are compared with human eye-hand data.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Neuroscience, Albert Einstein College of Medicine of Yeshiva University, 1410 Pelham Pkwy S., Rm 921, Bronx, New York 10461, USA.
32
33
Rajashekar U, van der Linde I, Bovik AC, Cormack LK. GAFFE: a gaze-attentive fixation finding engine. IEEE Trans Image Process 2008; 17:564-573. [PMID: 18390364] [DOI: 10.1109/tip.2008.917218]
Abstract
The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
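The fixation-statistics analysis described above can be illustrated roughly as follows. This is not the GAFFE implementation: the image, patch size, and fixation lists are invented, and the simulated fixations are deliberately placed on a bright region so that the expected fixated-versus-random difference appears.

```python
import numpy as np

def patch_stats(image, x, y, half=8):
    """Mean luminance and RMS contrast of the patch centred at (x, y)."""
    p = image[y - half:y + half, x - half:x + half].astype(float)
    lum = p.mean()
    rms = p.std() / lum if lum > 0 else 0.0             # RMS contrast
    return lum, rms

rng = np.random.default_rng(4)
image = rng.integers(0, 256, (128, 128))
image[40:60, 40:60] = rng.integers(128, 256, (20, 20))  # bright region
# Invented fixations clustered on the bright region, plus random controls.
fixations = [(rng.integers(44, 56), rng.integers(44, 56)) for _ in range(30)]
randoms = [(rng.integers(16, 112), rng.integers(16, 112)) for _ in range(30)]

fix_lum = np.mean([patch_stats(image, x, y)[0] for x, y in fixations])
rand_lum = np.mean([patch_stats(image, x, y)[0] for x, y in randoms])
print(f"luminance at fixations {fix_lum:.1f} vs random {rand_lum:.1f}")
```

Comparing such feature averages between fixated and randomly sampled patch ensembles is the basic move behind the finding that fixated patches have higher feature values than chance.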
Affiliation(s)
- U Rajashekar
- New York University, New York, NY 10003-6603, USA.
34
Rajashekar U, van der Linde I, Bovik AC, Cormack LK. Foveated analysis of image features at fixations. Vision Res 2007; 47:3160-72. [PMID: 17889221] [DOI: 10.1016/j.visres.2007.07.015]
Abstract
Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, in which image patches were analyzed at the resolution corresponding to their eccentricity from the prior fixation, we studied the statistics of four low-level local image features: luminance, RMS contrast, and bandpass outputs of both luminance and contrast, and discovered that the image patches around human fixations had, on average, higher values of each of these features at all eccentricities than the image patches selected at random. Bandpass contrast showed the greatest difference between human and random fixations, followed by bandpass luminance, RMS contrast, and luminance. An eccentricity-based analysis showed that shorter saccades were more likely to land on patches with higher values of these features. Compared to a full-resolution analysis, foveation produced an increased difference between human and random patch ensembles for contrast and its higher-order statistics.
Affiliation(s)
- Umesh Rajashekar
- Center for Perceptual Systems, The University of Texas at Austin, USA.
35
Underwood G, Templeman E, Lamming L, Foulsham T. Is attention necessary for object identification? Evidence from eye movements during the inspection of real-world scenes. Conscious Cogn 2007; 17:159-70. [PMID: 17222564] [DOI: 10.1016/j.concog.2006.11.008]
Abstract
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than their congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.
36
37