251
Nickel AE, Hopkins LS, Minor GN, Hannula DE. Attention capture by episodic long-term memory. Cognition 2020; 201:104312. PMID: 32387722. DOI: 10.1016/j.cognition.2020.104312.
Abstract
Everyday behavior depends upon the operation of concurrent cognitive processes. In visual search, studies that examine memory-attention interactions have indicated that long-term memory facilitates search for a target (e.g., contextual cueing), but the potential for memories to capture attention and decrease search efficiency has not been investigated. To address this gap in the literature, five experiments were conducted to examine whether task-irrelevant encoded objects might capture attention. In each experiment, participants encoded scene-object pairs. Then, in a visual search task, 6-object search displays were presented and participants were told to make a single saccade to targets defined by shape (e.g., a diamond among differently colored circles; Experiments 1, 4, and 5) or by color (e.g., a blue shape among differently shaped gray objects; Experiments 2 and 3). Sometimes, one of the distractors was from the encoded set, and occasionally the scene that had been paired with that object was presented prior to the search display. Results indicated that eye movements were made, in error, more often to encoded distractors than to baseline distractors, and that this effect was greatest when the corresponding scene was presented prior to search. When capture did occur, participants looked longer at encoded distractors if scenes had been presented, an effect that we attribute to the representational match between a retrieved associate and the identity of the encoded distractor in the search display. In addition, the presence of a scene resulted in slower saccade deployment when participants made first saccades to targets, as instructed. Experiments 4 and 5 suggest that this slowdown may be due to the relatively rare and, therefore, surprising appearance of visual stimulus information prior to search. Collectively, results suggest that information encoded into episodic memory can capture attention, which is consistent with the recent proposal that selection history can guide attentional selection.
Affiliation(s)
- Allison E Nickel
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Lauren S Hopkins
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Greta N Minor
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Deborah E Hannula
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
252
Prior target locations attract overt attention during search. Cognition 2020; 201:104282. PMID: 32387723. DOI: 10.1016/j.cognition.2020.104282.
Abstract
A key question about visual search is how we guide attention to objects that are relevant to our goals. Traditionally, theories of visual attention have emphasized guidance by explicit knowledge of the target feature. But there is growing evidence that attention is also implicitly guided by prior experience. One such example is the phenomenon of location priming, whereby attention is automatically allocated to the location where the search target was previously found. Problematically, much of the previous evidence for location priming has been disputed because it relies exclusively on manual response times, leaving unclear the relative contributions of location priming to attentional allocation and to later cognitive processes. The current study addressed this issue by measuring shifts of gaze, which provide a more direct measure of attentional orienting. In five experiments, first saccades were strongly attracted to the target location from the previous trial, even though this location was not predictive of the target location on the current trial. This oculomotor priming effect was so strong that it effectively disrupted attentional guidance to the search target. The results suggest that memories of recent experience can powerfully influence attentional allocation.
253
Chen S, Shi Z, Zang X, Zhu X, Assumpção L, Müller HJ, Geyer T. Crossmodal learning of target-context associations: When would tactile context predict visual search? Atten Percept Psychophys 2020; 82:1682-1694. PMID: 31845105. PMCID: PMC7297845. DOI: 10.3758/s13414-019-01907-0.
Abstract
It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). Crossmodal cueing disappeared again, however, when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.
Affiliation(s)
- Siyi Chen
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
- Zhuanghua Shi
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
- Xuelian Zang
- Center for Cognition and Brain Disorders, Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China
- Xiuna Zhu
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
- Leonardo Assumpção
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
- Hermann J Müller
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
- Thomas Geyer
- General and Experimental Psychology, Department of Psychology, LMU Munich, Leopoldstr 13, 80802, Munich, Germany
254
Thornton IM. MILO Mobile: An iPad App to Measure Search Performance in Multi-Target Sequences. Iperception 2020; 11:2041669520932587. PMID: 32612800. PMCID: PMC7307404. DOI: 10.1177/2041669520932587.
Abstract
This article introduces a mobile app version of the Multi-Item Localization (MILO) task. The MILO task was designed to explore the temporal context of search through a sequence and has proven useful in both basic and applied research settings. Here, we describe the basic features of the app and how it can be obtained, installed, and modified. We also provide example data files and present two new sets of empirical data to verify that previous findings concerning prospective planning and retrospective memory (i.e., inhibitory tagging) are reproducible with the app. We conclude by discussing ongoing studies and future modifications that illustrate the flexibility and potential of the MILO Mobile app.
Affiliation(s)
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta
255
Halfen EJ, Magnotti JF, Rahman MS, Yau JM. Principles of tactile search over the body. J Neurophysiol 2020; 123:1955-1968. PMID: 32233886. DOI: 10.1152/jn.00694.2019.
Abstract
Although we routinely experience complex tactile patterns over our entire body, how we selectively experience multisite touch over our bodies remains poorly understood. Here, we characterized tactile search behavior over the full body using a tactile analog of the classic visual search task. On each trial, participants judged whether a target stimulus (e.g., 10-Hz vibration) was present or absent anywhere on the body. When present, the target stimulus could occur alone or simultaneously with distractor stimuli (e.g., 30-Hz vibrations) on other body locations. We systematically varied the number and spatial configurations of the distractors as well as the target and distractor frequencies and measured the impact of these factors on tactile search response times. First, we found that response times were faster on target-present trials compared with target-absent trials. Second, response times increased with the number of stimulated sites, suggesting a serial search process. Third, search performance differed depending on stimulus frequencies. This frequency-dependent behavior may be related to perceptual grouping effects based on timing cues. We constructed linear models to explore how the locations of the target and distractor cues influenced tactile search behavior. Our modeling results reveal that, in isolation, cues on the index fingers make relatively greater contributions to search performance compared with stimulation experienced on other body sites. Additionally, costimulation of sites within the same limb or simply on the same body side preferentially influences search behavior. Our collective findings identify some principles of attentional search that are common to vision and touch, but others that highlight key differences that may be unique to body-based spatial perception.

NEW & NOTEWORTHY Little is known about how we selectively experience multisite touch patterns over the body. Using a tactile analog of the classic visual target search paradigm, we show that tactile search behavior for flutter cues is generally consistent with a serial search process. Modeling results reveal the preferential contributions of index finger stimulation and two-site stimulus interactions involving ipsilateral patterns and within-limb patterns. Our results offer initial evidence for spatial and temporal principles underlying tactile search behavior over the body.
Affiliation(s)
- Elizabeth J Halfen
- Departments of Neuroscience and Neurosurgery, Baylor College of Medicine, Houston, Texas
- John F Magnotti
- Departments of Neuroscience and Neurosurgery, Baylor College of Medicine, Houston, Texas
- Md Shoaibur Rahman
- Departments of Neuroscience and Neurosurgery, Baylor College of Medicine, Houston, Texas
- Jeffrey M Yau
- Departments of Neuroscience and Neurosurgery, Baylor College of Medicine, Houston, Texas
256
Grubert A, Eimer M. Preparatory Template Activation during Search for Alternating Targets. J Cogn Neurosci 2020; 32:1525-1535. PMID: 32319869. DOI: 10.1162/jocn_a_01565.
Abstract
Visual search is guided by representations of target-defining features (attentional templates). We tracked the time course of template activation processes during the preparation for search in a task where the identity of color-defined search targets switched across successive trials (ABAB). Task-irrelevant color probes that matched either the upcoming relevant target color or the previous now-irrelevant target color were presented every 200 msec during the interval between search displays. N2pc components (markers of attentional capture) were measured for both types of probes at each time point. A reliable probe N2pc indicates that the corresponding color template is active at the time when the probe appears. N2pcs of equal size emerged from 1000 msec before search display onset for both relevant-color and irrelevant-color probes, demonstrating that both color templates were activated concurrently. Evidence for color-selective attentional control was found only immediately before the arrival of the search display, where N2pcs were larger for relevant-color probes. These results reveal important limitations in the executive control of search preparation in tasks where two targets alternate across trials. Although the identity of the upcoming target is fully predictable, both task-relevant and task-irrelevant target templates are coactivated. Knowledge about target identity selectively biases these template activation processes in a temporally discrete fashion, guided by temporal expectations about when the target template will become relevant.
257
Abstract
Four decades of studies in visual attention and visual working memory used visual features such as colors, orientations, and shapes. The layout of their featural space is clearly established for most features (e.g., CIE-Lab for colors) but not shapes. Here, I attempted to reveal the basic dimensions of preattentive shape features by studying how shapes can be positioned relative to one another in a way that matches their perceived similarities. Specifically, 14 shapes were optimized as n-dimensional vectors to achieve the highest linear correlation (r) between the log-distances between C(14, 2) = 91 pairs of shapes and the discriminabilities (d') of these 91 pairs in a texture segregation task. These d' values were measured on a large sample (N = 200) and achieved high reliability (Cronbach's α = 0.982). The vast majority of variance in the results (r = 0.974) can be explained by a three-dimensional SCI shape space: segmentability, compactness, and spikiness.
Affiliation(s)
- Liqiang Huang
- Department of Psychology, Chinese University of Hong Kong, Hong Kong
258
Sobel KV, Puri AM, York AK. Visual search inverts the classic Stroop asymmetry. Acta Psychol (Amst) 2020; 205:103054. PMID: 32151791. DOI: 10.1016/j.actpsy.2020.103054.
Abstract
The Stroop effect is typically much larger than the reverse Stroop effect. One explanation for this asymmetry asserts that interference between the attended feature and an incongruent unattended feature depends on which feature is more strongly associated with the processing typically needed to complete the task. Accordingly, because identification of the target's color or the target word (as in the traditional Stroop paradigm) is more strongly associated with verbal processing than visual processing, the target's meaning should interfere with identification of the target's color (Stroop) more than vice versa (reverse Stroop). In contrast, localization is more strongly associated with visual processing, so strength-of-association predicts that the target's color should interfere with localizing the target word (reverse Stroop) more than vice versa (Stroop). Experiments 1 and 2 supported the strength-of-association account: compared to Stroop, the reverse Stroop effect was smaller for an identification task, but larger for a localization task. Because overall responses were slower for the reverse Stroop condition than the Stroop condition in Experiment 2, we entertained two alternative explanations for the reverse Stroop effect being larger than the Stroop effect. Experiments 3 and 4 showed that the larger reverse Stroop effect could not have been due to scaling, and Experiment 5 showed that it could not have been due to covert translation. Taken together, these experiments demonstrate the role of strength of association in generating the classic Stroop asymmetry, and pave the way for future exploration of the reverse Stroop effect using localization tasks.
259
Hurgobin Y, Le Floch V, Lemercier C. Effect of multiple extrinsic cues on consumers’ willingness to buy apples: A scenario-based study. Food Qual Prefer 2020. DOI: 10.1016/j.foodqual.2019.103860.
260
Furtak M, Doradzińska Ł, Ptashynska A, Mudrik L, Nowicka A, Bola M. Automatic Attention Capture by Threatening, But Not by Semantically Incongruent Natural Scene Images. Cereb Cortex 2020; 30:4158-4168. DOI: 10.1093/cercor/bhaa040.
Abstract
Visual objects are typically perceived as parts of an entire visual scene, and the scene’s context provides information crucial to the object recognition process. Fundamental insights into the mechanisms of context-object integration have come from research on semantically incongruent objects, which are defined as objects with a very low probability of occurring in a given context. However, the role of attention in processing of the context-object mismatch remains unclear, with some studies providing evidence for, and others against, an automatic capture of attention by incongruent objects. Therefore, in the present study, 25 subjects completed a dot-probe task in which pairs of scenes (congruent and incongruent, or neutral and threatening) were presented as task-irrelevant distractors. Importantly, threatening scenes are known to robustly capture attention and thus were included in the present study to provide a context for interpretation of results regarding incongruent scenes. Using the N2 posterior-contralateral (N2pc) ERP component as a primary measure, we revealed that threatening images indeed capture attention automatically and rapidly, but semantically incongruent scenes do not benefit from automatic attentional selection. Thus, our results suggest that identification of the context-object mismatch is not preattentive.
Affiliation(s)
- Marcin Furtak
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
- Łucja Doradzińska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
- Alina Ptashynska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
- Liad Mudrik
- School of Psychological Science, Tel Aviv University, 69978 Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, 69978 Tel Aviv, Israel
- Anna Nowicka
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
- Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
261
Baumeler D, Nako R, Born S, Eimer M. Attentional repulsion effects produced by feature-guided shifts of attention. J Vis 2020; 20:10. PMID: 32232375. PMCID: PMC7405701. DOI: 10.1167/jov.20.3.10.
Abstract
Attention shifts to particular objects in the visual field can distort perceptual location judgments. Visual stimuli are perceived to be shifted away from the current focus of attention (the attentional repulsion effect [ARE]). Although links between repulsion effects and stimulus-driven exogenous attentional capture have been demonstrated conclusively, it remains disputed whether AREs can also be elicited as a result of feature-guided attention shifts that are controlled by endogenous task sets. Here we demonstrate that this is indeed the case. Color singleton cues that appeared together with equiluminant gray items triggered repulsion effects only if they matched a current task-relevant color but not when their color was irrelevant. When target-color and nontarget-color singleton cues appeared in the same display, AREs emerged relative to the position of the target-color cue. By obtaining independent behavioral measures of perceptual repulsion and electrophysiological measures of attentional capture by target-color cues, we also showed that these two phenomena are correlated. Individuals who were more susceptible to attentional capture also produced larger AREs. These results confirm the existence of links between task-set contingent attentional capture and AREs. They also provide the first direct demonstration of the attentional nature of these effects with online brain activity measures: perceptual repulsion arises as the result of prior feature-guided attention shifts to specific locations in the visual field.
Affiliation(s)
- Denise Baumeler
- Faculté de Psychologie et des Sciences de l’Éducation, Université de Genève, Genève, Switzerland
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Rebecca Nako
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Sabine Born
- Faculté de Psychologie et des Sciences de l’Éducation, Université de Genève, Genève, Switzerland
- Martin Eimer
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
262
Expertise effects on attention and eye-movement control during visual search: Evidence from the domain of music reading. Atten Percept Psychophys 2020; 82:2201-2208. PMID: 32124250. DOI: 10.3758/s13414-020-01979-3.
Abstract
Experts in many domains use their domain-specific knowledge to rapidly locate relevant information. To explore this ability in music reading, we contrasted the eye movements of 30 expert musicians (with at least 10 years of music reading training) and 30 non-musicians (who could not read music) while they completed a visual search task that required them to match a section of a complex piano music score (i.e., the search template) to its identical counterpart within a larger music score (i.e., the search array). Critically, both the search template and array were presented simultaneously throughout each trial in the experiment, which allowed for visual comparisons between the search template and the array. Relative to the non-musicians, the experts had higher accuracy and also spent more time looking at the relevant regions and less time looking at irrelevant regions. Also, as evidence that the experts and non-musicians adopted qualitatively different search strategies, the experts spent more time than non-musicians looking at the search template at the beginning of the trial, and the experts returned to this region less often than non-musicians. Taken together, our results indicate that experts use domain-specific knowledge in the form of "chunks" (Chase & Simon, 1973a, 1973b) and "templates" (Gobet & Simon, 1996b, 2000) to acquire accurate representations of highly complex search templates.
263
Wu CC, D’Ardenne NM, Nishikawa RM, Wolfe JM. Gist processing in digital breast tomosynthesis. J Med Imaging (Bellingham) 2020; 7:022403. PMID: 31853462. PMCID: PMC6917568. DOI: 10.1117/1.jmi.7.2.022403.
Abstract
Evans et al. (2016) showed that radiologists can classify mammograms as normal or abnormal at above-chance levels after a 250-ms exposure. Our study documents a similar gist signal in digital breast tomosynthesis (DBT) images. DBT is a relatively new technology that creates a three-dimensional image set of slices through the volume of the breast. It improves performance over two-dimensional (2-D) mammography, but at a cost in reading time. In the experiment presented, radiologists (N = 16) viewed "movies" of DBT images from single breasts for an average of 1.5 s per case. Observers then marked the most likely lesion position on a blank outline and rated each case on a six-point scale from (1) certainly normal to (6) certainly recall. Results show that radiologists can discriminate normal from abnormal DBT cases at above-chance levels, as in 2-D mammography. Ability was correlated with experience reading DBT. Observers performed at above-chance levels even on those images where they could not localize the target, suggesting that this is a global signal that could prove valuable in the clinic.
Affiliation(s)
- Chia-Chien Wu
- Brigham and Women’s Hospital, Visual Attention Laboratory, Department of Surgery, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Nicholas M. D’Ardenne
- University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States
- Robert M. Nishikawa
- University of Pittsburgh, Department of Radiology, Pittsburgh, Pennsylvania, United States
- Jeremy M. Wolfe
- Brigham and Women’s Hospital, Visual Attention Laboratory, Department of Surgery, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
264
Fu D, Weber C, Yang G, Kerzel M, Nan W, Barros P, Wu H, Liu X, Wermter S. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. Front Integr Neurosci 2020; 14:10. PMID: 32174816. PMCID: PMC7056875. DOI: 10.3389/fnint.2020.00010.
Abstract
Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.
Affiliation(s)
- Di Fu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Cornelius Weber
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Guochun Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Matthias Kerzel
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Weizhi Nan
- Department of Psychology, Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Pablo Barros
- Department of Informatics, University of Hamburg, Hamburg, Germany
- Haiyan Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xun Liu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Stefan Wermter
- Department of Informatics, University of Hamburg, Hamburg, Germany
265
Ernst D, Becker S, Horstmann G. Novelty competes with saliency for attention. Vision Res 2020; 168:42-52. PMID: 32088400. DOI: 10.1016/j.visres.2020.01.004.
Abstract
A highly debated question in attention research is to what extent attention is biased by bottom-up factors such as saliency versus top-down factors as governed by the task. Visual search experiments in which participants are briefly familiarized with the task and then see a novel stimulus unannounced and for the first time support yet another factor, showing that novel and surprising features attract attention. In the present study, we tested whether gaze behavior as an indicator for attentional prioritization can be predicted accurately within displays containing both salient and novel stimuli by means of a priority map that assumes novelty as an additional source of activation. To that aim, we conducted a visual search experiment where a color singleton was presented for the first time in the surprise trial and manipulated the color-novelty of the remaining non-singletons between participants. In one group, the singleton was the only novel stimulus ("one-new"), whereas in another group, the non-singleton stimuli were likewise novel ("all-new"). The surprise trial was always target absent and designed such that top-down prioritization of any color was unlikely. The results show that the singleton in the all-new group captured the gaze less strongly, with more early fixations being directed to the novel non-singletons. Overall, the fixation pattern can accurately be explained by noisy priority maps where saliency and novelty compete for gaze control.
266
Search and concealment strategies in the spatiotemporal domain. Atten Percept Psychophys 2020;82:2393-2414. PMID: 32052344. DOI: 10.3758/s13414-020-01976-6.
Abstract
Although visual search studies have primarily focused on search behavior, concealment behavior is also important in the real world. However, previous studies in this regard are limited in that their findings about search and concealment strategies are restricted to the spatial (two-dimensional) domain. Thus, this study evaluated strategies during three-dimensional and temporal (i.e., spatiotemporal) search and concealment by asking participants to indicate where they would hide or find a target in a temporal sequence of items. The items were stacked in an upward (Experiments 1-3) or downward (Experiment 4) direction, and three factors were manipulated: scenario (hide vs. seek), partner type (friend vs. foe), and oddball (unique item in the sequence; present vs. absent). Participants in both the hide and seek scenarios frequently selected the oddball for friends but not foes, which suggests that they applied common strategies because the oddball automatically attracts attention and can be readily discovered by friends. Additionally, a principle unique to the spatiotemporal domain was revealed: when the oddball was absent, participants in both scenarios frequently selected the topmost item of the stacked layer for friends, regardless of temporal order, whereas they selected the first item in the sequence for foes, regardless of the stacked direction. These principles were not affected by visual masking or by the number of items in the sequence. Taken together, these results suggest that finding and hiding positions in the spatiotemporal domain rely on the presence of salient items and on physical accessibility or temporal remoteness, according to partner type.
267
Cimminella F, Della Sala S, Coco MI. Extra-foveal Processing of Object Semantics Guides Early Overt Attention During Visual Search. Atten Percept Psychophys 2020;82:655-670. PMID: 31792893. PMCID: PMC7246246. DOI: 10.3758/s13414-019-01906-1.
Abstract
Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role on the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the role of object-to-object semantic relatedness and visual saliency on search responses and eye-movement behaviour across arrays of increasing size (3, 5, 7). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated than related to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects manifested already during the very first fixation after array onset, were consistently found for increasing set sizes, and were independent of low-level visual saliency, which did not play any role. We conclude that object semantics can be extracted early in extra-foveal vision and capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention which assume that overt attention is guided by the visual appearance of stimuli, rather than by their semantics.
Affiliation(s)
- Francesco Cimminella
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK.
- Laboratory of Experimental Psychology, Suor Orsola Benincasa University, Naples, Italy.
- Sergio Della Sala
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK
- Moreno I Coco
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK.
- School of Psychology, The University of East London, London, UK.
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal.

268
Grieben R, Tekülve J, Zibner SKU, Lins J, Schneegans S, Schöner G. Scene memory and spatial inhibition in visual search: A neural dynamic process model and new experimental evidence. Atten Percept Psychophys 2020;82:775-798. PMID: 32048181. PMCID: PMC7246253. DOI: 10.3758/s13414-019-01898-y.
Abstract
Any object-oriented action requires that the object be first brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistently neural process account of FIT in both its dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
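The "stable activation states" that dynamic field theory builds on can be illustrated with a one-dimensional Amari-style neural field. This is a generic textbook sketch, not the authors' architecture: all parameters (resting level, kernel widths and amplitudes, input profile) are invented for illustration. A transient localized input creates an activation peak that sustains itself after the input is removed, the basic working-memory mechanism the model uses.

```python
import numpy as np

# 1-D dynamic neural field: local excitation plus broader inhibition lets a
# transient input create a self-sustained activation peak (working memory).
n, tau, dt, h = 101, 10.0, 1.0, -5.0
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
kernel = 8.0 * np.exp(-d**2 / (2 * 4.0**2)) - 3.0 * np.exp(-d**2 / (2 * 12.0**2))

def f(u):                          # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-u))

u = np.full(n, h)                  # field starts at its resting level
stimulus = np.zeros(n)
stimulus[45:56] = 8.0              # transient localized input around position 50

for t in range(400):
    s = stimulus if t < 150 else 0.0   # input is switched off halfway through
    u += (dt / tau) * (-u + h + s + kernel @ f(u))

peak_location = int(np.argmax(u))
```

After the input is removed, recurrent excitation keeps the peak above threshold at the stimulated location while lateral inhibition keeps the rest of the field suppressed, which is how such fields hold a scene-memory entry across processing steps.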
Affiliation(s)
- Raul Grieben
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jan Tekülve
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Stephan K. U. Zibner
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jonas Lins
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Gregor Schöner
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany

269
Lee TH, Kim SH, Katz B, Mather M. The Decline in Intrinsic Connectivity Between the Salience Network and Locus Coeruleus in Older Adults: Implications for Distractibility. Front Aging Neurosci 2020;12:2. PMID: 32082136. PMCID: PMC7004957. DOI: 10.3389/fnagi.2020.00002.
Abstract
We examined functional connectivity between the locus coeruleus (LC) and the salience network in healthy young and older adults to investigate why people become more prone to distraction with age. Recent findings suggest that the LC plays an important role in focusing processing on salient or goal-relevant information from multiple incoming sensory inputs (Mather et al., 2016). We hypothesized that the connection between the LC and the salience network declines in older adults, and therefore the salience network fails to appropriately filter out irrelevant sensory signals. To examine this possibility, we used resting-state-like fMRI data, in which all task-related activities were regressed out (Fair et al., 2007; Elliott et al., 2019), and performed a functional connectivity analysis based on the time-course of LC activity. Older adults showed reduced functional connectivity between the LC and the salience network compared with younger adults. Additionally, the salience network was relatively more coupled with the frontoparietal network than the default-mode network in older adults compared with younger adults, even though all task-related activities were regressed out. Together, these findings suggest that reduced interactions between the LC and the salience network impair the ability to prioritize the importance of incoming events, and in turn, the salience network fails to initiate the network switching (e.g., Menon and Uddin, 2010; Uddin, 2015) that would promote further attentional processing. A chronic lack of functional connection between the LC and the salience network may limit older adults' attentional and executive control resources.
Affiliation(s)
- Tae-Ho Lee
- Department of Psychology, Virginia Tech, Blacksburg, VA, United States
- Sun Hyung Kim
- Department of Psychiatry, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Benjamin Katz
- Department of Human Development and Family Science, Virginia Tech, Blacksburg, VA, United States
- Mara Mather
- Davis School of Gerontology, University of Southern California, Los Angeles, CA, United States

270
Abstract
Visual attention can sometimes be involuntarily captured by salient stimuli, and this may lead to impaired performance in a variety of real-world tasks. If observers were aware that their attention was being captured, they might be able to exert control and avoid subsequent distraction. However, it is unknown whether observers can detect attention capture when it occurs. In the current study, participants searched for a target shape and attempted to ignore a salient color distractor. On a subset of trials, participants then immediately classified whether the salient distractor captured their attention ("capture" vs. "no capture"). Participants were slower and less accurate at detecting the target on trials on which they reported "capture" than "no capture." Follow-up experiments revealed that participants specifically detected covert shifts of attention to the salient item. Altogether, these results indicate that observers can have immediate awareness of visual distraction, at least under certain circumstances.
271

272
Cho SA, Cho YS. Attentional Orienting by Non-informative Cue Is Shaped via Reinforcement Learning. Front Psychol 2020;10:2884. PMID: 32010011. PMCID: PMC6974624. DOI: 10.3389/fpsyg.2019.02884.
Abstract
It has been demonstrated that a reward-associated stimulus feature captures attention involuntarily. The present study tested whether spatial attentional orienting is biased via reinforcement learning. Participants were to identify a target stimulus presented in one of two placeholders, preceded by a non-informative arrow cue at the center of the display. Importantly, reward was available when the target occurred at a location cued by a reward cue, defined as a specific color (experiments 1 and 3) or a color-direction combination (experiment 2). The attentional bias of the reward cue was significantly increased as trials progressed, resulting in a greater cue-validity effect for the reward cue than the no-reward cue. This attentional bias was still evident even when controlling for the possibility that the incentive salience of the reward cue color modulates the cue-validity effect (experiment 2) or when the reward was withdrawn after reinforcement learning (experiment 3). However, it disappeared when the reward was provided regardless of cue validity (experiment 4), implying that the reinforcement contingency between reward and attentional orienting is a critical determinant of reinforcement learning-based spatial attentional modulation. Our findings highlight that a spatial attentional bias is shaped by value via reinforcement learning.
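The key manipulation here, a reinforcement contingency between orienting and reward versus reward delivered regardless of cue validity, can be caricatured with a simple delta-rule agent. This is an illustrative sketch, not the authors' paradigm or analysis: the agent, learning rate, softmax temperature, and reward scheme are all invented. The agent repeatedly chooses to orient toward or away from a non-informative cue (50% validity); a bias toward the cue emerges only when reward depends on having oriented to the cued, rewarded location.

```python
import numpy as np

def train_orienting(contingent, trials=4000, alpha=0.05, beta=3.0, seed=0):
    """Delta-rule agent: action 0 = orient toward the cue, 1 = orient away.
    Returns the final softmax probability of orienting toward the cue."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                       # learned action values
    for _ in range(trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        action = rng.choice(2, p=p)
        valid = rng.random() < 0.5        # non-informative cue: 50% validity
        if contingent:                    # reward requires orienting to the
            reward = 1.0 if (valid and action == 0) else 0.0   # valid location
        else:                             # reward independent of orienting
            reward = 1.0 if valid else 0.0
        q[action] += alpha * (reward - q[action])
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    return float(p[0])

p_contingent = train_orienting(contingent=True)
p_noncontingent = train_orienting(contingent=False)
```

With the contingency in place, only the orient-toward-cue action accumulates value, so the agent develops a cue-following bias; when reward arrives regardless of orienting, both action values converge to the same level and no bias forms, paralleling the disappearance of the effect in experiment 4.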
Affiliation(s)
- Yang Seok Cho
- Department of Psychology, Korea University, Seoul, South Korea

273
Witkowski M, Tomczak E, Łuczak M, Bronikowski M, Tomczak M. Fighting Left Handers Promotes Different Visual Perceptual Strategies than Right Handers: The Study of Eye Movements of Foil Fencers in Attack and Defence. Biomed Res Int 2020;2020:4636271. PMID: 32420345. PMCID: PMC7201802. DOI: 10.1155/2020/4636271.
Abstract
Left handers have long held the edge over right handers in one-on-one interactive combat sports. Particularly in fencing, top rankings show a relatively strong overrepresentation of left handers over right handers. Whether this can be attributed to perceptual strategies used by fencers in their bouts remains to be established. This study aims to verify whether right-handed fencers assess their opponents' behaviour based on different perceptual strategies when fencing a left vs. right hander. Twelve top-level (i.e., Olympic fencers, Junior World Team Fencing Champions, and top Polish senior foil fencers) right-handed female foil fencers (aged 16-30 years) took part in the study. They performed a total of 40 actions: 10 repetitions of offensive actions (attack) and 10 repetitions of defensive actions (defence), each type of action performed under 2 conditions (right- vs. left-handed opponent). While the participants were fencing, their eye movements were being recorded with a remote eye-tracker (SMI ETG 2.0). Both in their offensive and defensive actions, the fencers produced more fixations to the armed hand and spent more time observing the armed hand in duels with a left-handed (vs. right-handed) opponent. In defence, it was also the guard that attracted more fixations and gained a longer observation time in bouts with a left hander. In duels with a right-handed opponent, a higher number of fixations in attack and in defence, and longer observation times in defence were found for the upper torso. The results may point to different perceptual strategies employed in bouts with left- vs. right-handed individuals. The findings from this study may help to promote the implementation of specialized perceptual training programmes in foil fencing.
Affiliation(s)
- Ewa Tomczak
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
- Maciej Łuczak
- Poznań University of Physical Education, Poznań, Poland

274
Henderson JM. Meaning and attention in scenes. Psychology of Learning and Motivation 2020. DOI: 10.1016/bs.plm.2020.08.002.
275
Li K, Kadohisa M, Kusunoki M, Duncan J, Bundesen C, Ditlevsen S. Distinguishing between parallel and serial processing in visual attention from neurobiological data. R Soc Open Sci 2020;7:191553. PMID: 32218974. PMCID: PMC7029944. DOI: 10.1098/rsos.191553.
Abstract
Serial and parallel processing in visual search have long been debated in psychology, but the processing mechanism remains an open issue. Serial processing allows only one object at a time to be processed, whereas parallel processing assumes that various objects are processed simultaneously. Here, we present novel neural models for the two types of processing mechanisms based on analysis of simultaneously recorded spike trains, using electrophysiological data from the prefrontal cortex of rhesus monkeys processing task-relevant visual displays. We combine mathematical models describing neuronal attention with point process models for spike trains. The same model can explain both serial and parallel processing by adopting different parameter regimes. We present statistical methods to distinguish between serial and parallel processing based both on maximum likelihood estimates and on decoding the momentary focus of attention when two stimuli are presented simultaneously. Results show that both processing mechanisms are in play for the simultaneously recorded neurons, but neurons tend to follow parallel processing in the beginning after the onset of the stimulus pair, whereas they tend toward serial processing later on.
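Why the two regimes are statistically distinguishable from spike trains can be seen in a toy example. This is a caricature, not the paper's point-process models: rates, bin width, and switching schedule are invented. Under serial processing the neuron's rate alternates between attended and unattended levels as attention switches between the two stimuli, which inflates across-bin count variability relative to the constant intermediate rate expected under divided (parallel) attention.

```python
import numpy as np

rng = np.random.default_rng(1)

def bin_counts(rates_hz, bin_s=0.05):
    """Poisson spike count for each bin, one rate per bin."""
    return rng.poisson(np.asarray(rates_hz) * bin_s)

n_bins = 400
high, low = 200.0, 40.0       # attended vs. unattended firing rate (Hz)

# Serial: attention alternates between the two stimuli bin by bin.
serial_rates = np.where(np.arange(n_bins) % 2 == 0, high, low)
# Parallel: attention is divided, so the rate stays at the average level.
parallel_rates = np.full(n_bins, (high + low) / 2.0)

serial_counts = bin_counts(serial_rates)
parallel_counts = bin_counts(parallel_rates)

# Rate switching shows up as an inflated Fano factor (variance/mean).
fano_serial = serial_counts.var() / serial_counts.mean()
fano_parallel = parallel_counts.var() / parallel_counts.mean()
```

A homogeneous Poisson process has a Fano factor near 1, so count statistics well above that are a signature of rate switching; likelihood-based methods such as those in the paper exploit exactly this kind of difference, with the attention dynamics modeled explicitly.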
Affiliation(s)
- Kang Li
- Department of Mathematical Sciences, University of Copenhagen, Copenhagen, Denmark
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
- Mikiko Kadohisa
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Makoto Kusunoki
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- John Duncan
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Claus Bundesen
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
- Susanne Ditlevsen
- Department of Mathematical Sciences, University of Copenhagen, Copenhagen, Denmark

276
Krekhov A, Cmentowski S, Waschk A, Kruger J. Deadeye Visualization Revisited: Investigation of Preattentiveness and Applicability in Virtual Environments. IEEE Trans Vis Comput Graph 2020;26:547-557. PMID: 31425106. DOI: 10.1109/tvcg.2019.2934370.
Abstract
Visualizations rely on highlighting to attract and guide our attention. To make an object of interest stand out independently from a number of distractors, the underlying visual cue, e.g., color, has to be preattentive. In our prior work, we introduced Deadeye as an instantly recognizable highlighting technique that works by rendering the target object for one eye only. In contrast to prior approaches, Deadeye excels by not modifying any visual properties of the target. However, in the case of 2D visualizations, the method requires an additional setup to allow dichoptic presentation, which is a considerable drawback. As a follow-up to requests from the community, this paper explores Deadeye as a highlighting technique for 3D visualizations, because such stereoscopic scenarios support dichoptic presentation out of the box. Deadeye suppresses binocular disparities for the target object, so we cannot assume the applicability of our technique as a given fact. With this motivation, the paper presents quantitative evaluations of Deadeye in VR, including configurations with multiple heterogeneous distractors as an important robustness challenge. After confirming the preserved preattentiveness (all average accuracies above 90%) under such real-world conditions, we explore VR volume rendering as an example application scenario for Deadeye. We depict a possible workflow for integrating our technique, conduct an exploratory survey to demonstrate benefits and limitations, and finally provide related design implications.
277
Wolfe JM. Forty years after feature integration theory: An introduction to the special issue in honor of the contributions of Anne Treisman. Atten Percept Psychophys 2020;82:1-6. PMID: 31950427. PMCID: PMC7039157. DOI: 10.3758/s13414-019-01966-3.
Affiliation(s)
- Jeremy M Wolfe
- Professor of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, 02115, USA.
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, Cambridge, MA, 02139, USA.

278
Hitch GJ, Allen RJ, Baddeley AD. Attention and binding in visual working memory: Two forms of attention and two kinds of buffer storage. Atten Percept Psychophys 2020;82:280-293. PMID: 31420804. PMCID: PMC6994435. DOI: 10.3758/s13414-019-01837-x.
Abstract
We review our research on the episodic buffer in the multicomponent model of working memory (Baddeley, 2000), making explicit the influence of Anne Treisman's work on the way our research has developed. The crucial linking theme concerns binding, whereby the individual features of an episode are combined as integrated representations. We summarize a series of experiments on visual working memory that investigated the retention of feature bindings and individual features. The effects of cognitive load, perceptual distraction, prioritization, serial position, and their interactions form a coherent pattern. We interpret our findings as demonstrating contrasting roles of externally driven and internally driven attentional processes, as well as a distinction between visual buffer storage and the focus of attention. Our account has strong links with Treisman's concept of focused attention and aligns with a number of contemporary approaches to visual working memory.
279
Masó-Puigdellosas A, Campos D, Méndez V. Transport properties of random walks under stochastic noninstantaneous resetting. Phys Rev E 2019;100:042104. PMID: 31770871. DOI: 10.1103/physreve.100.042104.
Abstract
Random walks with stochastic resetting provide a tractable framework to study interesting features of central-place motion. In this work, we introduce noninstantaneous resetting as a two-state model that combines an exploring state, where the walker moves randomly according to a propagator, and a returning state, where the walker performs ballistic motion with constant velocity towards the origin. We study the emerging transport properties for two types of reset-time probability density functions (PDFs): exponential and Pareto. In the first case, we find the stationary distribution and a general expression for the stationary mean-square displacement (MSD) in terms of the propagator. We find that the stationary MSD may increase, decrease, or remain constant with the returning velocity, depending on the moments of the propagator. For the Pareto resetting PDF, we also study the stationary distribution and the asymptotic scaling of the MSD for diffusive motion. In this case, we see that the resetting modifies the transport regime, making the overall transport subdiffusive and even reaching a stationary MSD, i.e., a stochastic localization. This phenomenon is also observed in diffusion under instantaneous Pareto resetting. We check the main results with stochastic simulations of the process.
Affiliation(s)
- Axel Masó-Puigdellosas
- Grup de Física Estadística, Departament de Física, Facultat de Ciències, Edifici Cc, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain
- Daniel Campos
- Grup de Física Estadística, Departament de Física, Facultat de Ciències, Edifici Cc, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain
- Vicenç Méndez
- Grup de Física Estadística, Departament de Física, Facultat de Ciències, Edifici Cc, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain

280
Ramzaoui H, Faure S, Spotorno S. Alzheimer's Disease, Visual Search, and Instrumental Activities of Daily Living: A Review and a New Perspective on Attention and Eye Movements. J Alzheimers Dis 2019;66:901-925. PMID: 30400086. DOI: 10.3233/jad-180043.
Abstract
Many instrumental activities of daily living (IADLs), like cooking and managing finances and medications, involve finding one or several objects efficiently and in a timely manner within complex environments. They may thus be disrupted by visual search deficits. These deficits, present in Alzheimer's disease (AD) from its early stages, arise from impairments in multiple attentional and memory mechanisms. A growing body of research on visual search in AD has examined several factors underlying search impairments in simple arrays. Little is known about how AD patients search in real-world scenes and in real settings, and about how such impairments affect patients' functional autonomy. Here, we review studies on visuospatial attention and visual search in AD. We then consider why analysis of patients' oculomotor behavior is a promising way to improve understanding of the specific search deficits in AD, and of their role in impairing IADL performance. We also highlight why paradigms developed in research on real-world scenes and real settings in healthy individuals are valuable for investigating visual search in AD. Finally, we indicate future research directions that may offer new insights to improve visual search abilities and autonomy in AD patients.
Affiliation(s)
- Hanane Ramzaoui
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sylvane Faure
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sara Spotorno
- School of Psychology, University of Aberdeen, UK.
- Institute of Neuroscience and Psychology, University of Glasgow, UK

281
Harada Y, Ohyama J. Spatiotemporal Characteristics of 360-Degree Basic Attention. Sci Rep 2019;9:16083. PMID: 31695051. PMCID: PMC6834598. DOI: 10.1038/s41598-019-52313-3.
Abstract
The spatiotemporal characteristics of basic attention are important for understanding attending behaviours in real-life situations, and they are useful for evaluating the accessibility of visual information. However, although people are encircled by their 360-degree surroundings in real life, no study has addressed the general characteristics of attention to 360-degree surroundings. Here, we conducted an experiment using virtual reality technology to examine the spatiotemporal characteristics of attention in a highly controlled basic visual context consisting of a 360-degree surrounding. We measured response times and gaze patterns during the 360-degree search task and examined the spatial distribution of attention and its temporal variations in a 360-degree environment based on the participants' physical position. Data were collected from both younger adults and older adults to consider age-related differences. The results showed the fundamental spatiotemporal characteristics of 360-degree attention, which can be used as basic criteria to analyse the structure of exogenous effects on attention in complex 360-degree surroundings in real-life situations. For practical purposes, we created spherical criteria maps of 360-degree attention, which are useful for estimating attending behaviours to 360-degree environmental information or for evaluating visual information design in living environments, workspaces, or other real-life contexts.
Affiliation(s)
- Yuki Harada
- Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan
- Junji Ohyama
- Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan

282
Andersen E, Maier A. The attentional guidance of individual colours in increasingly complex displays. Appl Ergon 2019;81:102885. PMID: 31422277. DOI: 10.1016/j.apergo.2019.102885.
Abstract
The use of colours is a prevalent and effective tool for improving design. Understanding the effect of colours on attention is crucial for designers who wish to understand how their interfaces will be used. Previous research has consistently shown that attention is biased towards colour. However, despite previous evidence indicating that colours should be treated individually, it has thus far not been investigated whether this difference is reflected in individual effects on attention. To address this, a visual search experiment was conducted that tested the attentional guidance of six individual colours (red, blue, green, yellow, orange, purple) in increasingly complex displays. Results showed that the individual colours differed significantly in their level of guidance of attention, and that these differences increased as the visual complexity of the display increased. Implications for visual design and future research on applying colour in visual attention research and design are discussed.
Affiliation(s)
- Emil Andersen
- Technical University of Denmark, DTU Management, Engineering Systems Group, Diplomvej, Kgs. Lyngby, Denmark
- Anja Maier
- Technical University of Denmark, DTU Management, Engineering Systems Group, Diplomvej, Kgs. Lyngby, Denmark

283
Krasovskaya S, MacInnes WJ. Salience Models: A Computational Cognitive Neuroscience Review. Vision (Basel) 2019;3:E56. PMID: 31735857. PMCID: PMC6969943. DOI: 10.3390/vision3040056.
Abstract
The seminal model by Laurent Itti and Christof Koch demonstrated that we can compute the entire flow of visual processing from input to resulting fixations. Despite many replications and follow-ups, few have matched the impact of the original model, so what made this model so groundbreaking? We have selected five key contributions that distinguish the original salience model by Itti and Koch; namely, its contribution to our theoretical, neural, and computational understanding of visual processing, as well as the spatial and temporal predictions for fixation distributions. During the last 20 years, advances in the field have brought up various techniques and approaches to salience modelling, many of which tried to improve or add to the initial Itti and Koch model. One of the most recent trends has been to adopt the computational power of deep learning neural networks; however, this has also shifted their primary focus to spatial classification. We present a review of recent approaches to modelling salience, starting from direct variations of the Itti and Koch salience model to sophisticated deep-learning architectures, and discuss the models from the point of view of their contribution to computational cognitive neuroscience.
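The salience-map idea at the heart of the Itti and Koch model can be sketched in a few lines. This is a heavily reduced single-feature illustration, not the full model, which combines colour, intensity, and orientation channels across multiple scales: here one intensity channel gets a centre-surround (difference-of-Gaussians) contrast map, and the winner-take-all maximum is taken as the predicted first fixation. Image contents and blur scales are arbitrary.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along rows then columns."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

# Toy display: three dim blobs and one bright blob.
img = np.zeros((64, 64))
for (cy, cx), a in [((16, 16), 0.5), ((48, 16), 0.5), ((48, 48), 0.5), ((16, 48), 1.0)]:
    img[cy-2:cy+3, cx-2:cx+3] = a

# Centre-surround contrast (difference of Gaussians) as the salience map;
# the winner-take-all location is the predicted first fixation.
salience = np.abs(blur(img, 1.0) - blur(img, 4.0))
fy, fx = np.unravel_index(np.argmax(salience), salience.shape)
```

The highest-contrast item wins the first fixation; in the full model this winner would then be inhibited (inhibition of return) so attention moves on to the next most salient location, producing the temporal fixation predictions discussed above.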
Affiliation(s)
- Sofia Krasovskaya
- Vision Modelling Laboratory, Faculty of Social Science, National Research University Higher School of Economics, 101000 Moscow, Russia
- School of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
- W. Joseph MacInnes
- Vision Modelling Laboratory, Faculty of Social Science, National Research University Higher School of Economics, 101000 Moscow, Russia
- School of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia

284
He C, Cheung OS. Category selectivity for animals and man-made objects: Beyond low- and mid-level visual features. J Vis 2019; 19:22. [DOI: 10.1167/19.12.22]
Affiliation(s)
- Chenxi He
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
- Olivia S. Cheung
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates

285
Long B, Moher M, Carey SE, Konkle T. Animacy and object size are reflected in perceptual similarity computations by the preschool years. Visual Cognition 2019. [DOI: 10.1080/13506285.2019.1664689]
Affiliation(s)
- Bria Long
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Mariko Moher
- Department of Psychology, Williams College, Williamstown, MA, USA
- Susan E. Carey
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA, USA

286
Geng JJ, Witkowski P. Template-to-distractor distinctiveness regulates visual search efficiency. Curr Opin Psychol 2019; 29:119-125. [PMID: 30743200] [PMCID: PMC6625942] [DOI: 10.1016/j.copsyc.2019.01.003]
Abstract
All models of attention include the concept of an attentional template (or a target or search template). The template is conceptualized as target information held in memory that is used for prioritizing sensory processing and determining if an object matches the target. It is frequently assumed that the template contains a veridical copy of the target. However, we review recent evidence showing that the template encodes a version of the target that is adapted to the current context (e.g. distractors, task, etc.); information held within the template may include only a subset of target features, real world knowledge, pre-existing perceptual biases, or even be a distorted version of the veridical target. We argue that the template contents are customized in order to maximize the ability to prioritize information that distinguishes targets from distractors. We refer to this as template-to-distractor distinctiveness and hypothesize that it contributes to visual search efficiency by exaggerating target-to-distractor dissimilarity.
Affiliation(s)
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States
- Phillip Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States

287
Liesefeld HR, Müller HJ. Distractor handling via dimension weighting. Curr Opin Psychol 2019; 29:160-167. [DOI: 10.1016/j.copsyc.2019.03.003]

288
Koolen R. On Visually-Grounded Reference Production: Testing the Effects of Perceptual Grouping and 2D/3D Presentation Mode. Front Psychol 2019; 10:2247. [PMID: 31632326] [PMCID: PMC6781859] [DOI: 10.3389/fpsyg.2019.02247]
Abstract
When referring to a target object in a visual scene, speakers are assumed to consider certain distractor objects to be more relevant than others. The current research predicts that the way in which speakers come to a set of relevant distractors depends on how they perceive the distance between the objects in the scene. It reports on the results of two language production experiments, in which participants referred to target objects in photo-realistic visual scenes. Experiment 1 manipulated three factors that were expected to affect perceived distractor distance: two manipulations of perceptual grouping (region of space and type similarity), and one of presentation mode (2D vs. 3D). In line with most previous research on visually-grounded reference production, an offline measure of visual attention was taken here: the occurrence of overspecification with color. The results showed effects of region of space and type similarity on overspecification, suggesting that distractors that are perceived as being in the same group as the target are more often considered relevant distractors than distractors in a different group. Experiment 2 verified this suggestion with a direct measure of visual attention, eye tracking, and added a third manipulation of grouping: color similarity. For region of space in particular, the eye-movement data indeed showed patterns in the expected direction: distractors within the same region as the target were fixated more often, and longer, than distractors in a different region. Color similarity was found to affect overspecification with color, but not gaze duration or the number of distractor fixations. The expected effects of presentation mode (2D vs. 3D), however, were not convincingly borne out by the data.
Taken together, these results provide direct evidence for the close link between scene perception and language production, and indicate that perceptual grouping principles can guide speakers in determining the distractor set during reference production.
Affiliation(s)
- Ruud Koolen
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands

289
Gayet S, Peelen MV. Scenes Modulate Object Processing Before Interacting With Memory Templates. Psychol Sci 2019; 30:1497-1509. [PMID: 31525114] [PMCID: PMC6787763] [DOI: 10.1177/0956797619869905]
Abstract
When searching for relevant objects in our environment (say, an apple), we create a memory template (a red sphere), which causes our visual system to favor template-matching visual input (applelike objects) at the expense of template-mismatching visual input (e.g., leaves). Although this principle seems straightforward in a lab setting, it poses a problem in naturalistic viewing: Two objects that have the same size on the retina will differ in real-world size if one is nearby and the other is far away. Using the Ponzo illusion to manipulate perceived size while keeping retinal size constant, we demonstrated across 71 participants that visual objects attract attention when their perceived size matches a memory template, compared with mismatching objects that have the same size on the retina. This shows that memory templates affect visual selection after object representations are modulated by scene context, thus providing a working mechanism for template-based search in naturalistic vision.
Affiliation(s)
- Surya Gayet
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Marius V. Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University

290
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. [PMID: 30472539] [PMCID: PMC6513732] [DOI: 10.1016/j.copsyc.2018.11.005]
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Affiliation(s)
- Jeremy M Wolfe (corresponding author)
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital; Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170
- Igor S Utochkin
- National Research University Higher School of Economics, Armyansky per. 4, 101000 Moscow, Russian Federation

291
Capozzi F, Human LJ, Ristic J. Attention promotes accurate impression formation. J Pers 2019; 88:544-554. [PMID: 31482574] [DOI: 10.1111/jopy.12509]
Abstract
OBJECTIVE An ability to form accurate impressions of others is vital for adaptive social behavior in humans. Here, we examined whether attending to persons more is associated with greater accuracy in personality impressions. METHOD We asked 42 observers (36 females; mean age = 21 years, age range = 18-28; expected power = 0.96) to form personality impressions of unacquainted individuals (i.e., targets) from video interviews while their attentional behavior was assessed using eye tracking. We examined whether (a) attending more to targets benefited accuracy, (b) attending to specific body parts (e.g., face vs. body) drove this association, and (c) targets' ease of personality readability modulated these effects. RESULTS Paying more attention to a target was associated with forming more accurate personality impressions. Attention to the whole person contributed to this effect, with this association occurring independently of targets' ease of readability. CONCLUSIONS These findings show that attending more to a person is associated with increased accuracy and thus suggest that attention promotes social adaptation by supporting accurate social perception.
Affiliation(s)
- Francesca Capozzi
- Department of Psychology, McGill University, Montreal, Quebec, Canada
- Lauren J Human
- Department of Psychology, McGill University, Montreal, Quebec, Canada
- Jelena Ristic
- Department of Psychology, McGill University, Montreal, Quebec, Canada

292
Van de Weijgert M, Van der Burg E, Donk M. Attentional guidance varies with display density. Vision Res 2019; 164:1-11. [PMID: 31401217] [DOI: 10.1016/j.visres.2019.08.001]
Abstract
The aim of the present study was to investigate how display density affects attentional guidance in heterogeneous search displays. In Experiment 1 we presented observers with heterogeneous sparse and dense search displays which were adaptively changed over the course of the experiment using genetic algorithms. We generated random displays and, based upon fastest search times, the displays that allowed most efficient search were selected to generate new displays for the next generations, thus revealing which properties facilitated or inhibited target search across display densities. The results showed that the prevalence of distractors sharing the target color was substantially reduced over generations in sparse displays. Dense displays also evolved to contain fewer distractors sharing the target color, but only when the orientation of the distractors resembled the target orientation. More importantly, spatial analyses revealed that changes across generations occurred across all areas in sparse displays but were confined to the area around the target location in dense displays. In Experiment 2, in which we used a factorial design, we showed that the presence of potentially interfering distractors in the target area affected search in dense displays but not in sparse displays. Together the results suggest that the role of salience-driven attentional guidance is larger in dense than in sparse displays, even in the absence of display homogeneity.
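The genetic-algorithm procedure described in this abstract can be sketched roughly as follows (a toy illustration, not the authors' implementation; the display encoding, the search-time cost model, and all parameters are invented for the example): each display is a "genome" of distractor properties, the displays with the fastest simulated search times are selected as parents, and the next generation is bred by crossover and mutation.

```python
import random

def make_display(n_distractors=12):
    """A display 'genome': each distractor either shares the target color or not."""
    return [random.choice(["target-color", "other-color"]) for _ in range(n_distractors)]

def search_time(display):
    """Toy fitness: search is slower the more distractors share the target color."""
    shared = display.count("target-color")
    return 400 + 50 * shared + random.gauss(0, 20)   # ms, invented cost model

def evolve(pop_size=20, generations=30, mutation_rate=0.05):
    population = [make_display() for _ in range(pop_size)]
    for _ in range(generations):
        # Select the faster half as parents (lower search time = fitter).
        parents = sorted(population, key=search_time)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(["target-color", "other-color"])
                     if random.random() < mutation_rate else g for g in child]
            children.append(child)
        population = children
    return population

random.seed(1)
final = evolve()
# Target-color distractors should become rarer over generations.
shared_per_display = [d.count("target-color") for d in final]
```

In the actual experiments the "fitness" is a human observer's measured search time rather than a formula, which is what lets the evolved displays reveal which distractor properties helped or hurt search.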
Affiliation(s)
- Marlies Van de Weijgert
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Faculty of Engineering, Design and Computing, Inholland University of Applied Sciences, Delft, the Netherlands
- Erik Van der Burg
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; School of Psychology, University of Sydney, Sydney, Australia
- Mieke Donk
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands

294
Thornton IM, de’Sperati C, Kristjánsson Á. The influence of selection modality, display dynamics and error feedback on patterns of human foraging. Visual Cognition 2019. [DOI: 10.1080/13506285.2019.1658001]
Affiliation(s)
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida, Malta
- Claudio de’Sperati
- Laboratory of Action, Perception and Cognition, Università Vita-Salute San Raffaele, Milano, Italy
- Experimental Psychology Unit, Division of Neuroscience, San Raffaele Scientific Institute, Milano, Italy
- Árni Kristjánsson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation

295
Liu Y. Visual search characteristics of precise map reading by orienteers. PeerJ 2019; 7:e7592. [PMID: 31497408] [PMCID: PMC6709663] [DOI: 10.7717/peerj.7592]
Abstract
This article compares differences in eye movements between orienteers of different skill levels during map information search, and characterizes the visual search patterns, and thereby the cognitive characteristics, of orienteers during precise map reading. We recruited 44 orienteers at three skill levels (experts, advanced beginners, and novices) and recorded their behavioral responses and eye movement data while they read maps of differing complexity. We found that map complexity (complex vs. simple) affects the quality of orienteers' route planning during precise map reading. Specifically, when observing complex maps, more competent orienteers tended to show better route planning (i.e., shorter route-planning time, longer gaze time, and a more concentrated distribution of gazes). Expert orienteers demonstrated clear cognitive advantages in finding key information. We also found that, in the route-planning stage, expert orienteers and advanced beginners attended first to the checkpoint description table: the expert group extracted information faster and with more concentrated attention, whereas the novice group paid less attention to the table and their gaze was scattered. Experts treated the information in the checkpoint description table as the key to the problem and gave priority to this area in route decision making. These results advance our understanding of professional knowledge and problem solving in orienteering.
Affiliation(s)
- Yang Liu
- Department of Physical Education, Shaanxi Normal University, Xi’an, Shaanxi, China

298
Yoshimura N, Yonemitsu F, Marmolejo-Ramos F, Ariga A, Yamada Y. Task Difficulty Modulates the Disrupting Effects of Oral Respiration on Visual Search Performance. J Cogn 2019; 2:21. [PMID: 31517239] [PMCID: PMC6676927] [DOI: 10.5334/joc.77]
Abstract
Previous research has suggested that oral respiration may disturb cognitive function and health. The present study investigated whether oral respiration negatively affects visual attentional processing during a visual search task. Participants performed a visual search task in the following three breathing conditions: wearing a nasal plug, wearing surgical tape over their mouths, or no modification (oral vs. nasal vs. control). The participants searched for a target stimulus within different set sizes of distractors in three search conditions (orientation vs. colour vs. conjunction). Experiment 1 did not show any effect due to respiration. Experiment 2 rigorously manipulated the search efficiency and found that participants required more time to find a poorly discriminable target during oral breathing compared with other breathing styles, which was due to the heightened intercept under this condition. Because the intercept is an index of pre-search sensory processing or motor response in visual search, such cognitive processing was likely disrupted by oral respiration. These results suggest that oral respiration and attentional processing during inefficient visual search share a common cognitive resource.
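The intercept/slope logic the authors rely on comes from regressing response time on set size; a minimal sketch (the RT values are invented for the illustration) shows how the intercept, indexing pre-search sensory/motor cost, separates from the slope, indexing per-item search cost:

```python
import numpy as np

# Hypothetical mean RTs (ms) per set size for two breathing conditions.
set_sizes = np.array([4, 8, 16])
rt_nasal = np.array([620, 700, 860])   # invented data
rt_oral  = np.array([680, 760, 920])   # same slope, raised intercept

def search_function(set_sizes, rts):
    """Least-squares fit of RT = intercept + slope * set_size."""
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope, intercept

slope_n, int_n = search_function(set_sizes, rt_nasal)
slope_o, int_o = search_function(set_sizes, rt_oral)
# Equal slopes with a raised intercept - the pattern in Experiment 2 -
# points to disrupted processing before (or after) the item-by-item search.
```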
Affiliation(s)
- Naoto Yoshimura
- Graduate School of Human-Environment Studies, Kyushu University, JP
- Japan Society for the Promotion of Science, Tokyo, JP
- Fumiya Yonemitsu
- Graduate School of Human-Environment Studies, Kyushu University, JP
- Japan Society for the Promotion of Science, Tokyo, JP
- Atsunori Ariga
- Graduate School of Integrated Arts and Sciences, Hiroshima University, JP
- Yuki Yamada
- Faculty of Arts and Science, Kyushu University, JP

299
Hansmann-Roth S, Chetverikov A, Kristjánsson Á. Representing color and orientation ensembles: Can observers learn multiple feature distributions? J Vis 2019; 19:2. [DOI: 10.1167/19.9.2]
Affiliation(s)
- Sabrina Hansmann-Roth
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Andrey Chetverikov
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Cognitive Research Lab, Russian Academy of National Economy and Public Administration, Moscow, Russia
- Árni Kristjánsson
- Icelandic Vision Lab, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia

300
Prpic V, Kniestedt I, Camilleri E, Maureira MG, Kristjánsson Á, Thornton IM. A serious game to explore human foraging in a 3D environment. PLoS One 2019; 14:e0219827. [PMID: 31344063] [PMCID: PMC6657838] [DOI: 10.1371/journal.pone.0219827]
Abstract
Traditional search tasks have taught us much about vision and attention. Recently, several groups have begun to use multiple-target search to explore more complex and temporally extended "foraging" behaviour. Many of these new foraging tasks, however, maintain the simplified 2D displays and response demands associated with traditional, single-target visual search. In this respect, they may fail to capture important aspects of real-world search or foraging behaviour. In the current paper, we present a serious game for mobile platforms, developed in Unity3D, in which human participants play the role of an animal foraging for food in a simulated 3D environment. Game settings can be adjusted, so that, for example, custom target and distractor items can be uploaded, and task parameters, such as the number of target categories or target/distractor ratio are all easy to modify. We are also making the Unity3D project available, so that further modifications can also be made. We demonstrate how the app can be used to address specific research questions by conducting two human foraging experiments. Our results indicate that in this 3D environment, a standard feature/conjunction manipulation does not lead to a reduction in foraging runs, as it is known to do in simple, 2D foraging tasks.
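A "foraging run" is a maximal stretch of consecutive same-category selections, so the run measure reported in this abstract reduces to a simple count over the selection sequence. A sketch (the category labels are invented for the illustration):

```python
def count_runs(selections):
    """Number of runs: maximal stretches of same-category selections."""
    if not selections:
        return 0
    runs = 1
    for prev, cur in zip(selections, selections[1:]):
        if cur != prev:
            runs += 1
    return runs

# A forager who exhausts one category before switching produces few runs...
assert count_runs(["A", "A", "A", "B", "B", "B"]) == 2
# ...while random interleaving between two categories produces many.
assert count_runs(["A", "B", "A", "B", "A", "B"]) == 6
```

Fewer runs than expected by chance is the classic signature of category-by-category foraging, which is why a feature/conjunction manipulation that fails to reduce runs in 3D is the paper's key result.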
Affiliation(s)
- Valter Prpic
- Institute for Psychological Science, Faculty of Health and Life Sciences, De Montfort University, Leicester, United Kingdom
- Árni Kristjánsson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Oddi v. Sturlugötu, Reykjavik, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida, MSD, Malta