1. Nartker M, Firestone C, Egeth H, Phillips I. Six ways of failing to see (and why the differences matter). Iperception 2023;14:20416695231198762. PMID: 37781486; PMCID: PMC10536858; DOI: 10.1177/20416695231198762.
Abstract
Sometimes we look but fail to see: our car keys on a cluttered desk, a repeated word in a carefully proofread email, or a motorcycle at an intersection. Wolfe and colleagues present a unifying, mechanistic framework for understanding these "Looked But Failed to See" errors, explaining how such misses arise from natural constraints on human visual processing. Here, we offer a conceptual taxonomy of six distinct ways we might be said to fail to see, and explore how these relate to processes in Wolfe et al.'s model, how they can be distinguished experimentally, and why the differences matter.
Affiliation(s)
- Makaela Nartker: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Chaz Firestone: Department of Psychological and Brain Sciences and Department of Philosophy, Johns Hopkins University, Baltimore, MD, USA
- Howard Egeth: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Ian Phillips: Department of Psychological and Brain Sciences and Department of Philosophy, Johns Hopkins University, Baltimore, MD, USA
2. Lepori MA, Firestone C. Can You Hear Me Now? Sensitive Comparisons of Human and Machine Perception. Cogn Sci 2022;46:e13191. DOI: 10.1111/cogs.13191.
Affiliation(s)
- Michael A. Lepori: Department of Psychological & Brain Sciences, Johns Hopkins University
- Chaz Firestone: Department of Psychological & Brain Sciences, Johns Hopkins University
3. Modeling mean estimation tasks in within-trial and across-trial contexts. Atten Percept Psychophys 2022;84:2384-2407. PMID: 35199324; DOI: 10.3758/s13414-021-02410-1.
Abstract
The mean estimation task, which explicitly asks observers to estimate the mean feature value of multiple stimuli, is a fundamental paradigm in research areas such as ensemble coding and cue integration. The current study uses computational models to formalize how observers summarize information in mean estimation tasks. We compare model predictions from our Fidelity-based Integration Model (FIM) and other models on their ability to simulate observed patterns in within-trial weight distribution, across-trial information integration, and set-size effects on mean estimation accuracy. Experiments show non-equal weighting within trials in both sequential and simultaneous mean estimation tasks. Observers implicitly overestimated trial means below the global mean and underestimated trial means above the global mean. Mean estimation performance declined and stabilized with increasing set sizes. FIM successfully simulated all observed patterns, while other models failed. FIM's information sampling structure provides a new way to interpret the capacity limit in visual working memory and sub-sampling strategies. As a model framework, FIM offers task-dependent modeling for various ensemble coding paradigms, facilitating research synthesis across different studies in the literature.
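The bias pattern reported above (overestimating trial means that lie below the global mean and underestimating those above it) falls out of any observer model that mixes a capacity-limited trial sample with the global mean. A minimal sketch, assuming a hypothetical sample size and mixing weight that are not taken from the paper:

```python
import random

def estimate_trial_mean(trial_values, global_mean, sample_size=4, w=0.7, seed=0):
    """Toy mean-estimation observer (illustrative only, not the published FIM):
    average a capacity-limited random sample of the items, then shrink the
    estimate toward the global mean seen across trials."""
    rng = random.Random(seed)
    sample = rng.sample(trial_values, min(sample_size, len(trial_values)))
    local_mean = sum(sample) / len(sample)
    return w * local_mean + (1 - w) * global_mean

# A trial whose true mean (30) lies below the global mean (50) is
# overestimated; one whose true mean (70) lies above it is underestimated.
low = estimate_trial_mean([30.0] * 8, global_mean=50.0)
high = estimate_trial_mean([70.0] * 8, global_mean=50.0)
```

FIM itself weights items by the fidelity of their representations rather than by a fixed `w`; the sketch only shows why shrinkage toward the global mean reproduces the reported over- and underestimation.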
4. Stewart EEM, Ludwig CJH, Schütz AC. Humans represent the precision and utility of information acquired across fixations. Sci Rep 2022;12:2411. PMID: 35165336; PMCID: PMC8844410; DOI: 10.1038/s41598-022-06357-7.
Abstract
Our environment contains an abundance of objects with which humans interact daily, gathering visual information through sequences of eye movements to choose which object is best suited for a particular task. This process is not trivial: task affordance defines the search strategy, and the estimated precision of the visual information gathered from each object may be used to track perceptual confidence for object selection. This study addresses the fundamental problem of how such visual information is metacognitively represented and used for subsequent behaviour, and reveals a complex interplay between task affordance, visual information gathering, and metacognitive decision making. People fixate higher-utility objects and, most importantly, retain metaknowledge about how much information they have gathered about these objects, which is used to guide perceptual report choices. These findings suggest that such metacognitive knowledge is important in situations where decisions are based on information acquired in a temporal sequence.
Affiliation(s)
- Emma E M Stewart: Department of Experimental Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Alexander C Schütz: Allgemeine und Biologische Psychologie and Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, Marburg, Germany
5. Finding meaning in "wrong responses": The multiple object-awareness paradigm shows that visual awareness is probabilistic. Atten Percept Psychophys 2022;84:553-559. PMID: 34988905; DOI: 10.3758/s13414-021-02398-8.
Abstract
Visual information that observers perceive and remember at any given moment guides behavior in daily life. However, binary forced-choice responses, often used in visual research, limit what observers can report about the visual information they perceive and remember. We used a new multiple object-awareness paradigm in which observers can use multiple clicks to find a target. We calculated visual awareness capacity from the first-attempt accuracy and from the total number of clicks, respectively. Results showed that the capacity estimated from the clicks in guessing from N was significantly greater than that estimated from first-attempt accuracy. Further analysis found that when observers could not locate the target on their first attempt, they were more likely to click closer to the target or on stimuli that matched its color. In addition, even when observers used the same number of clicks to find a target (2 or 3), the average click distance was shorter when observers reported high subjective visibility. The findings are compatible with the partial awareness hypothesis and with the visual ensembles and summary statistics hypothesis, both of which hold that visual awareness is probabilistic. These results also support visual short-term memory models in which many items are stored but with a resolution, or noise level, that depends on the number of items in memory.
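To see how a capacity number can be derived from first-attempt accuracy, here is a textbook-style correction-for-guessing sketch under a simple high-threshold assumption (the paper's own calculation, which also uses the total number of clicks, may differ; the guessing model here is an illustrative assumption):

```python
def capacity_from_accuracy(first_attempt_accuracy, set_size):
    """Invert the toy model  accuracy = K/N + (1 - K/N) * (1/N):
    the observer knows the target's location for K of the N items and
    otherwise guesses uniformly among all N locations. Returns K."""
    n = set_size
    chance = 1.0 / n
    known_fraction = (first_attempt_accuracy - chance) / (1.0 - chance)
    return max(0.0, min(1.0, known_fraction)) * n

# Chance-level accuracy implies zero items in awareness; perfect
# first-attempt accuracy implies all N items.
assert capacity_from_accuracy(1.0 / 8, 8) == 0.0
assert capacity_from_accuracy(1.0, 8) == 8.0
```

Counting clicks beyond the first credits the observer with partial knowledge (clicking near the target, or on color-matched items), which is why the click-based estimate exceeds the first-attempt estimate.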
6. Sklar AY, Kardosh R, Hassin RR. From non-conscious processing to conscious events: a minimalist approach. Neurosci Conscious 2021;2021:niab026. PMID: 34676105; PMCID: PMC8524171; DOI: 10.1093/nc/niab026.
Abstract
The minimalist approach we develop here is a framework for appreciating how non-conscious processing and conscious contents shape human cognition, broadly defined. It is composed of three simple principles. First, cognitive processes are inherently non-conscious, while their inputs and (interim) outputs may be consciously experienced. Second, non-conscious processes and elements of the cognitive architecture prioritize information for conscious experiences. Third, conscious events are composed of series of conscious contents and non-conscious processes, with increased duration leading to more opportunity for processing. The narrowness of conscious experiences is conceptualized here as a solution to the problem of channeling the plethora of non-conscious processes into action and communication processes that are largely serial. The framework highlights the importance of prioritization for consciousness, and we provide an illustrative review of three main factors that shape prioritization: stimulus strength, motivational relevance, and mental accessibility. We further discuss when and how this framework (i) is compatible with previous theories, (ii) enables new understandings of established findings and models, and (iii) generates new predictions and understandings.
Affiliation(s)
- Asael Y Sklar: Edmond & Lily Safra Center for Brain Sciences, The Hebrew University, Edmond J. Safra Campus, Jerusalem 9190401, Israel
- Rasha Kardosh: Psychology Department, The Hebrew University, Mount Scopus, Jerusalem 91905, Israel
- Ran R Hassin: James Marshall Chair of Psychology, Psychology Department & The Federmann Center for the Study of Rationality, The Hebrew University, Mount Scopus, Jerusalem 91905, Israel
7. Sklar AY, Goldstein AY, Abir Y, Goldstein A, Dotsch R, Todorov A, Hassin RR. Did you see it? Robust individual differences in the speed with which meaningful visual stimuli break suppression. Cognition 2021;211:104638. PMID: 33740538; DOI: 10.1016/j.cognition.2021.104638.
Abstract
Perceptual conscious experiences result from non-conscious processes that precede them. We document a new characteristic of the cognitive system: the speed with which meaningful visual stimuli are prioritized to consciousness over competing noise in visual masking paradigms. In ten experiments (N = 399) we find that an individual's non-conscious visual prioritization speed (NVPS) is ubiquitous across a wide variety of stimuli, and generalizes across visual masks, suppression tasks, and time. We also find that variation in NVPS is unique, in that it cannot be explained by variation in general speed, perceptual decision thresholds, short-term visual memory, or three networks of attention (alerting, orienting, and executive). Finally, we find that NVPS is correlated with subjective measures of sensitivity, as measured by the Highly Sensitive Person scale. We conclude by discussing the implications of variance in NVPS for understanding individual variance in behavior and the neural substrates of consciousness.
Affiliation(s)
- Asael Y Sklar: Edmond & Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ariel Y Goldstein: Princeton Institute of Neuroscience, Princeton University, Princeton, NJ, USA
- Yaniv Abir: Department of Psychology, Columbia University, NY, USA
- Alon Goldstein: Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ran R Hassin: James Marshall Chair of Psychology, Department of Psychology and The Federmann Center for the Study of Rationality, The Hebrew University of Jerusalem, Jerusalem, Israel
8. Kamkar S, Abrishami Moghaddam H, Lashgari R, Oksama L, Li J, Hyönä J. Effectiveness of "rescue saccades" on the accuracy of tracking multiple moving targets: An eye-tracking study on the effects of target occlusions. J Vis 2020;20(12):5. PMID: 33196768; PMCID: PMC7671859; DOI: 10.1167/jov.20.12.5.
Abstract
Occlusion is one of the main challenges in tracking multiple moving objects. In almost all real-world scenarios, a moving object or a stationary obstacle occludes targets, partially or completely, for a short or long time during their movement. A previous study (Zelinsky & Todor, 2010) reported that subjects make timely saccades toward an object in danger of being occluded. Observers make these so-called “rescue saccades” to prevent target swapping. In this study, we examined whether these saccades are helpful. To this end, we used as stimuli recorded videos of zebrafish larvae swimming freely in a circular container. We considered two main types of occlusion: object-object occlusions, which naturally exist in the videos, and object-occluder occlusions, created by adding a stationary doughnut-shaped occluder to some videos. Four different scenarios were studied: (1) no occlusions, (2) only object-object occlusions, (3) only object-occluder occlusions, or (4) both object-object and object-occluder occlusions. For each condition, two set sizes (two and four targets) were used. Participants’ eye movements were recorded during tracking, and rescue saccades were extracted afterward. The results showed that rescue saccades were helpful in handling object-object occlusions but had no reliable effect on tracking through object-occluder occlusions. The presence of occlusions generally increased visual sampling of the scenes; nevertheless, tracking accuracy declined due to occlusion.
Affiliation(s)
- Shiva Kamkar: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Hamid Abrishami Moghaddam: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Reza Lashgari: Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Lauri Oksama: Finnish Defence Research Agency, Human Performance Division, Järvenpää, Finland
- Jie Li: Institutes of Psychological Sciences, Hangzhou Normal University, Hangzhou, China
- Jukka Hyönä: Department of Psychology, University of Turku, Turku, Finland
9. Kosovicheva A, Alaoui-Soce A, Wolfe JM. Looking ahead: When do you find the next item in foraging visual search? J Vis 2020;20(2):3. PMID: 32040162; PMCID: PMC7343403; DOI: 10.1167/jov.20.2.3.
Abstract
Many real-world visual tasks involve searching for multiple instances of a target (e.g., picking ripe berries). What strategies do observers use when collecting items in this type of search? Do they wait to finish collecting the current item before starting to look for the next target, or do they search ahead for future targets? We utilized behavioral and eye-tracking measures to distinguish between these two possibilities in foraging search. Experiment 1 used a color wheel technique in which observers searched for T shapes among L shapes while all items independently cycled through a set of colors. Trials were abruptly terminated, and observers reported both the color and location of the next target that they intended to click. Using observers’ color reports to infer target-finding times, we demonstrate that observers found the next item before the time of the click on the current target. We validated these results in Experiment 2 by recording fixation locations around the time of each click. Experiment 3 utilized a different procedure, in which all items were intermittently occluded during the trial. We then calculated a distribution of when targets were visible around the time of each click, allowing us to infer when they were most likely found. In a fourth and final experiment, observers indicated the locations of multiple future targets after the search was abruptly terminated. Together, our results provide converging evidence to demonstrate that observers can find the next target before collecting the current target and can typically forage one to two items ahead.
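The color-wheel inference in Experiment 1 rests on a simple idea: if every item's color changes on a known schedule, the color an observer reports for the next intended target pins down when that target must have been found. A sketch of that reconstruction (the frame-based timing and three-color cycle are illustrative assumptions, not the paper's actual stimulus parameters):

```python
def infer_finding_frame(color_sequence, click_frame, reported_color):
    """Return the most recent frame, at or before the click, on which the
    item displayed the reported color (None if it never did)."""
    for frame in range(click_frame, -1, -1):
        if color_sequence[frame] == reported_color:
            return frame
    return None

# An item cycling red -> green -> blue on every frame:
colors = ["red", "green", "blue"] * 4  # frames 0..11
found = infer_finding_frame(colors, 7, "blue")  # click occurs at frame 7
# The reported color was last shown before the click, so the next target
# was located before the current target was collected.
```

With `colors` as above, a click at frame 7 and a report of "blue" maps back to frame 5, two frames before the click, which is the signature of searching ahead.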
10. Kamkar S, Ghezloo F, Moghaddam HA, Borji A, Lashgari R. Multiple-target tracking in human and machine vision. PLoS Comput Biol 2020;16:e1007698. PMID: 32271746; PMCID: PMC7144962; DOI: 10.1371/journal.pcbi.1007698.
Abstract
Humans are able to track multiple objects at any given time in their daily activities; for example, we can drive a car while monitoring obstacles, pedestrians, and other vehicles. Several past studies have examined how humans track targets simultaneously and what underlying behavioral and neural mechanisms they use. At the same time, computer-vision researchers have proposed different algorithms to track multiple targets automatically; these algorithms are useful for video surveillance, team-sport analysis, video analysis, video summarization, and human-computer interaction. Although there are several efficient biologically inspired algorithms in artificial intelligence, the human multiple-target tracking (MTT) ability is rarely imitated in computer-vision algorithms. In this paper, we review MTT studies in neuroscience and biologically inspired MTT methods in computer vision and discuss the ways in which they can be seen as complementary: findings from cognitive studies, such as how spatial attention and memory are exploited during tracking, can inspire developers to construct higher-performing MTT algorithms, while MTT algorithms can in turn raise new questions in the cognitive science domain whose answers shed light on the neural processes underlying MTT.
Affiliation(s)
- Shiva Kamkar: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran; Brain Engineering Research Center, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Fatemeh Ghezloo: Brain Engineering Research Center, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Hamid Abrishami Moghaddam: Machine Vision and Medical Image Processing Laboratory, Faculty of Electrical and Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Ali Borji: HCL America, Manhattan, New York City, United States of America
- Reza Lashgari: Brain Engineering Research Center, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
11. Neuronal correlates of full and partial visual conscious perception. Conscious Cogn 2019;78:102863. PMID: 31887533; DOI: 10.1016/j.concog.2019.102863.
Abstract
Stimuli may induce only partial consciousness, an intermediate between null and full consciousness in which the presence, but not the identity, of an object can be reported. The differences in the neuronal basis of full and partial consciousness are poorly understood. We investigated whether evoked and oscillatory activity could dissociate full from partial conscious perception. We recorded human cortical activity with magnetoencephalography (MEG) during a visual perception task in which the stimulus could be either partially or fully perceived. Partial consciousness was associated with an early increase in evoked activity and theta/low-alpha-band oscillations, while full consciousness was also associated with late evoked activity and beta-band oscillations. Full consciousness was dissociated from partial consciousness by stronger evoked activity and a late increase in theta oscillations, localized to higher-order visual regions and to posterior parietal and prefrontal cortices. Our results reveal that both evoked activity and theta oscillations dissociate partial from full consciousness.
12. Cohen MA. What Is the True Capacity of Visual Cognition? Trends Cogn Sci 2018;23:83-86. PMID: 30573400; DOI: 10.1016/j.tics.2018.12.002.
Abstract
How much can we perceive and remember at a time? Results from various paradigms traditionally show that observers are aware of surprisingly little of the world around them. However, a recent study by Wu and Wolfe (Curr. Biol. 2018;28:3430-3434) uses a novel technique to reveal that observers have more knowledge of the visual world than previously believed.
Affiliation(s)
- Michael A Cohen: Amherst College, Department of Psychology and Program in Neuroscience, 220 South Pleasant St., Amherst, MA 01002, USA