1
Quillien T. Rational information search in welfare-tradeoff cognition. Cognition 2023;231:105317. [PMID: 36434941] [DOI: 10.1016/j.cognition.2022.105317]
Abstract
One of the most important dimensions along which we evaluate others is their propensity to value our welfare: we like people who are disposed to incur costs for our benefit and who refrain from imposing costs on us to benefit themselves. The evolutionary importance of social valuation in our species suggests that humans have cognitive mechanisms that are able to efficiently extract information about how much another person values them. Here I test the hypothesis that people are spontaneously interested in the kinds of events that have the most potential to reveal such information. In two studies, I presented participants (Ns = 216; 300) with pairs of dilemmas that another individual faced in an economic game; for each pair, I asked them to choose the dilemma for which they would most like to see the decision that the individual had made. On average, people spontaneously selected the choices that had the potential to reveal the most information about the individual's valuation of the participant, as quantified by a Bayesian ideal search model. This finding suggests that human cooperation is supported by sophisticated cognitive mechanisms for information-gathering.
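The Bayesian ideal-search idea in this abstract can be illustrated with a small sketch: under a prior over the actor's welfare-tradeoff ratio (WTR), the dilemma whose observed choice yields the larger expected reduction in uncertainty is the more informative one to inspect. The grid, uniform prior, and deterministic choice rule below are assumptions made for illustration, not the paper's exact model.

```python
# Illustrative sketch only: a simplified Bayesian ideal-search calculation in the
# spirit of the model described above. The WTR grid, deterministic choice rule,
# and uniform prior are assumptions for this example.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(prior, wtr_grid, cost_self, benefit_other):
    """Expected reduction in uncertainty about the actor's WTR from seeing their
    choice: pay cost_self so the participant gains benefit_other, or keep the money."""
    helps = wtr_grid * benefit_other > cost_self        # deterministic choice rule
    p_help = prior[helps].sum()
    eig = entropy(prior)
    for outcome, p_outcome in ((helps, p_help), (~helps, 1 - p_help)):
        if p_outcome > 0:
            posterior = np.where(outcome, prior, 0.0)
            posterior /= posterior.sum()
            eig -= p_outcome * entropy(posterior)
    return eig

wtr_grid = np.linspace(0, 1.5, 151)        # candidate valuations of the participant
prior = np.ones_like(wtr_grid) / wtr_grid.size

# Which of two dilemmas is more informative to look at?
dilemma_a = dict(cost_self=5, benefit_other=10)   # threshold WTR = 0.5
dilemma_b = dict(cost_self=9, benefit_other=10)   # threshold WTR = 0.9
for name, d in [("A", dilemma_a), ("B", dilemma_b)]:
    print(name, round(expected_info_gain(prior, wtr_grid, **d), 3))
```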
Affiliation(s)
- Tadeg Quillien
- School of Informatics, University of Edinburgh, United Kingdom.
2
Barnes J, Blair MR, Walshe RC, Tupper PF. LAG-1: A dynamic, integrative model of learning, attention, and gaze. PLoS One 2022;17:e0259511. [PMID: 35298465] [PMCID: PMC8929614] [DOI: 10.1371/journal.pone.0259511]
Abstract
It is clear that learning and attention interact, but it is an ongoing challenge to integrate their psychological and neurophysiological descriptions. Here we introduce LAG-1, a dynamic neural field model of learning, attention, and gaze, which we fit to human learning and eye-movement data from two category learning experiments. LAG-1 comprises three control systems: one for visuospatial attention, one for saccadic timing and control, and one for category learning. The model extracts a form of information gain from pairwise differences in simple associations between visual features and categories. Feeding this gain back as a reentrant signal, together with bottom-up visual information and top-down spatial priority, appropriately influences the initiation of saccades. LAG-1 provides a moment-by-moment simulation of the interactions of learning and gaze, and thus simultaneously produces phenomena on many timescales, from the duration of saccades and gaze fixations, to the response times for trials, to the slow optimization of attention toward task-relevant information across a whole experiment. With only three free parameters (learning rate, trial impatience, and fixation impatience), LAG-1 produces qualitatively correct fits for learning, behavioural timing, and eye movement measures, and also for previously unmodelled empirical phenomena (e.g., fixation orders showing stimulus-specific attention, and decreasing fixation counts during feedback). Because LAG-1 is built to capture attention and gaze generally, we demonstrate how it can be applied to other phenomena of visual cognition such as the free viewing of visual stimuli, visual search, and covert attention.
Affiliation(s)
- Jordan Barnes
- Department of Psychology, Simon Fraser University, Burnaby, BC, Canada
- Mark R. Blair
- Department of Psychology, Simon Fraser University, Burnaby, BC, Canada
- R. Calen Walshe
- Center for Perceptual Systems, University of Texas, Austin, Texas, United States of America
- Paul F. Tupper
- Department of Mathematics, Simon Fraser University, Burnaby, BC, Canada
3
Grigorenko EL, Love BC. Bidirectional influences of information sampling and concept learning. Psychol Rev 2022;129:213-234. [PMID: 34279981] [PMCID: PMC8766620] [DOI: 10.1037/rev0000287]
Abstract
Contemporary models of categorization typically sidestep the problem of how information is initially encoded during decision making. Instead, a focus of this work has been to investigate how, through selective attention, stimulus representations are "contorted" such that behaviorally relevant dimensions are accentuated (or "stretched"), and the representations of irrelevant dimensions are ignored (or "compressed"). In high-dimensional real-world environments, it is computationally infeasible to sample all available information, and human decision makers selectively sample information from sources expected to provide relevant information. To address these and other shortcomings, we develop an active sampling model, Sampling Emergent Attention (SEA), which sequentially and strategically samples information sources until the expected cost of information exceeds the expected benefit. The model specifies the interplay of two components, one involved in determining the expected utility of different information sources and the other in representing knowledge and beliefs about the environment. These two components interact such that knowledge of the world guides information sampling, and what is sampled updates knowledge. Like human decision makers, the model displays strategic sampling behavior, such as terminating information search when sufficient information has been sampled and adaptively adjusting the search path in response to previously sampled information. The model also shows human-like failure modes. For example, when information exploitation is prioritized over exploration, the bidirectional influences between information sampling and learning can lead to the development of beliefs that systematically differ from reality. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
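As a rough sketch of the stopping rule described in this abstract (keep querying sources while the expected benefit of another query exceeds its cost), here is a toy Bayesian version; the two-category generative model, feature probabilities, and sampling cost are invented for illustration and are not SEA's actual specification.

```python
# Minimal sketch: sample information sources until the expected cost of further
# information exceeds its expected benefit. All numbers are toy assumptions.
import numpy as np

p_feature = np.array([[0.9, 0.2, 0.6],    # P(feature_i = 1 | category A)
                      [0.3, 0.7, 0.6]])   # P(feature_i = 1 | category B)
prior = np.array([0.5, 0.5])
sample_cost = 0.02                         # cost of querying one more source
reward_correct = 1.0

def expected_accuracy(belief):
    return belief.max()                    # classify as the more probable category

def posterior(belief, feature, value):
    like = p_feature[:, feature] if value else 1 - p_feature[:, feature]
    post = belief * like
    return post / post.sum()

def expected_benefit(belief, feature):
    """Expected gain in accuracy-based payoff from observing this feature."""
    p_one = belief @ p_feature[:, feature]
    gain = (p_one * expected_accuracy(posterior(belief, feature, 1))
            + (1 - p_one) * expected_accuracy(posterior(belief, feature, 0))
            - expected_accuracy(belief))
    return reward_correct * gain

belief, unseen, rng = prior.copy(), {0, 1, 2}, np.random.default_rng(0)
true_cat = 0
while unseen:
    best = max(unseen, key=lambda f: expected_benefit(belief, f))
    if expected_benefit(belief, best) <= sample_cost:
        break                              # stop: information no longer pays for itself
    value = rng.random() < p_feature[true_cat, best]
    belief = posterior(belief, best, value)
    unseen.discard(best)
print("final belief:", belief.round(3))
```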
4
The effects of eye movements on the visual cortical responding variability based on a spiking network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.013]
5
Liu Y, Luan Y, Zhang G, Hu H, Jiang J, Zhang L, Qing T, Zou Y, Yang D, Xi L. Human reliability analysis for operators in the digital main control rooms of nuclear power plants. J Nucl Sci Technol 2020. [DOI: 10.1080/00223131.2020.1720848]
Affiliation(s)
- Yanzi Liu
- State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Design Company LTD. (Shenzhen), Shenzhen, Guangdong Province, China
- Yu Luan
- State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Design Company LTD. (Shenzhen), Shenzhen, Guangdong Province, China
- Gang Zhang
- State Key Laboratory of Nuclear Power Safety Monitoring Technology and Equipment, China Nuclear Power Design Company LTD. (Shenzhen), Shenzhen, Guangdong Province, China
- Hong Hu
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Jianjun Jiang
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Li Zhang
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Tao Qing
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Yanhua Zou
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Dang Yang
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
- Liaozi Xi
- School of Safety and Environment Engineering, Hunan Institute of Technology, Hengyang, Hunan Province, China
6
MacDonald K, Marchman VA, Fernald A, Frank MC. Children flexibly seek visual information to support signed and spoken language comprehension. J Exp Psychol Gen 2019;149:1078-1096. [PMID: 31750713] [DOI: 10.1037/xge0000702]
Abstract
During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, as with the ASL learners in Experiment 1, this delay resulted in the gathering of more visual information and a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
7
Meder B, Nelson JD, Jones M, Ruggeri A. Stepwise versus globally optimal search in children and adults. Cognition 2019;191:103965. [PMID: 31415923] [DOI: 10.1016/j.cognition.2019.05.002]
Abstract
How do children and adults search for information when stepwise-optimal strategies fail to identify the most efficient query? The value of questions is often measured in terms of stepwise information gain (expected reduction of entropy on the next time step) or other stepwise-optimal methods. However, such myopic models are not guaranteed to identify the most efficient sequence of questions, that is, the shortest path to the solution. In two experiments we contrast stepwise methods with globally optimal strategies and study how younger children (around age 8, N = 52), older children (around age 10, N = 99), and adults (N = 101) search in a 20-questions game where planning ahead is required to identify the most efficient first question. Children searched as efficiently as adults, but also as myopically. Both children and adults tended to rely on heuristic stepwise-optimal strategies, focusing primarily on questions' implications for the next time step, rather than planning ahead.
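For concreteness, here is a minimal sketch of the stepwise (myopic) question value discussed above, i.e., the expected one-step reduction in entropy over hypotheses from a yes/no question; the hypothesis space and questions below are toy assumptions, not the paper's stimuli.

```python
# Stepwise information gain of a yes/no question over a set of hypotheses.
# Toy hypothesis space and questions, invented for illustration.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def stepwise_information_gain(prior, yes_set):
    """Expected entropy reduction from a question answered 'yes' exactly for
    the hypotheses in yes_set (answers assumed deterministic given the truth)."""
    p_yes = prior[yes_set].sum()
    eig = entropy(prior)
    for mask, p_ans in ((yes_set, p_yes), (~yes_set, 1 - p_yes)):
        if p_ans > 0:
            post = np.where(mask, prior, 0.0)
            eig -= p_ans * entropy(post / post.sum())
    return eig

# Eight hypotheses ("monsters"), unequal prior probabilities.
prior = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05])
questions = {
    "is it one of the first four?": np.array([1, 1, 1, 1, 0, 0, 0, 0], bool),
    "is it the most common one?":   np.array([1, 0, 0, 0, 0, 0, 0, 0], bool),
    "is it one of the rare ones?":  np.array([0, 0, 0, 0, 0, 1, 1, 1], bool),
}
for q, yes in questions.items():
    print(f"{q:32s} EIG = {stepwise_information_gain(prior, yes):.3f}")
# Greedily asking the max-EIG question each step is the stepwise-optimal strategy;
# as the abstract notes, it need not yield the shortest expected question sequence.
```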
Affiliation(s)
- Björn Meder
- MPRG iSearch, Max Planck Institute for Human Development, Berlin, Germany; Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany; University of Erfurt, Germany.
- Jonathan D Nelson
- MPRG iSearch, Max Planck Institute for Human Development, Berlin, Germany; University of Surrey, United Kingdom
- Matt Jones
- University of Colorado Boulder, United States
- Azzurra Ruggeri
- MPRG iSearch, Max Planck Institute for Human Development, Berlin, Germany; Technical University of Munich, Germany
8
Crupi V, Nelson JD, Meder B, Cevolani G, Tentori K. Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search. Cogn Sci 2018;42:1410-1456. [PMID: 29911318] [DOI: 10.1111/cogs.12613]
Abstract
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
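As a pointer to the unification described here, a small sketch of the Sharma-Mittal family in its common order-r/degree-t parameterization; the limits below recover Shannon, Hartley, Rényi, and Tsallis/quadratic entropies. This follows the textbook form of the family and may differ notationally from the paper.

```python
# Sharma-Mittal entropy family (order r, degree t); limits recover familiar entropies.
import numpy as np

def sharma_mittal(p, r, t, eps=1e-12):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(r - 1) < eps and abs(t - 1) < eps:          # Shannon limit
        return -np.sum(p * np.log(p))
    if abs(r - 1) < eps:                               # r -> 1, t != 1
        shannon = -np.sum(p * np.log(p))
        return (np.exp((1 - t) * shannon) - 1) / (1 - t)
    power_sum = np.sum(p ** r)
    if abs(t - 1) < eps:                               # Renyi limit
        return np.log(power_sum) / (1 - r)
    return (power_sum ** ((1 - t) / (1 - r)) - 1) / (1 - t)

p = [0.5, 0.25, 0.125, 0.125]
print("Shannon           :", sharma_mittal(p, 1, 1))   # natural-log (nats) convention
print("Hartley           :", sharma_mittal(p, 0, 1))   # log of the number of possibilities
print("Renyi (order 2)   :", sharma_mittal(p, 2, 1))
print("Tsallis/quadratic :", sharma_mittal(p, 2, 2))   # quadratic entropy = Tsallis of order 2
```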
Affiliation(s)
- Vincenzo Crupi
- Center for Logic, Language, and Cognition, Department of Philosophy and Education, University of Turin
- Jonathan D Nelson
- School of Psychology, University of Surrey
- Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development
- Björn Meder
- Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development
- Katya Tentori
- Center for Mind/Brain Sciences, University of Trento
9
Yang SCH, Lengyel M, Wolpert DM. Active sensing in the categorization of visual patterns. eLife 2016;5:e12215. [PMID: 26880546] [PMCID: PMC4764587] [DOI: 10.7554/eLife.12215]
Abstract
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise, and inaccuracies in eye movements, and that the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations.

eLife digest: To interact with the world around us, we need to decide how best to direct our eyes and other senses to extract relevant information. When viewing a scene, people fixate on a sequence of locations by making fast eye movements to shift their gaze between locations. Previous studies have shown that these fixations are not random, but are actively chosen so that they depend on both the scene and the task. For example, in order to determine the gender or emotion from a face, we fixate around the eyes or the nose, respectively. Previous studies have only analyzed whether humans choose the optimal fixation locations in very simple situations, such as searching for a square among a set of circles. Therefore, it is not known how efficient we are at optimizing our rapid eye movements to extract high-level information from visual scenes, such as determining whether an image of fur belongs to a cheetah or a zebra. Yang, Lengyel and Wolpert developed a mathematical model that determines the amount of information that can be extracted from an image by any set of fixation locations. The model could also work out the next best fixation location that would maximize the amount of information that could be collected. This model shows that humans are about 70% efficient in planning each eye movement. Furthermore, it suggests that the inefficiencies are largely caused by imperfect vision and inaccurate eye movements. Yang, Lengyel and Wolpert's findings indicate that we combine information from multiple locations to direct our eye movements so that we can maximize the information we collect from our surroundings. The next challenge is to extend this mathematical model and experimental approach to even more complex visual tasks, such as judging an individual's intentions, or working out the relationships between people in real-life settings.
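A minimal sketch of one step of a Bayesian active-sensor strategy of the kind described here: choose the next fixation location that maximizes the expected information gain about the category. The toy pattern statistics and noise model are assumptions for illustration, not the paper's stimulus model.

```python
# Pick the next fixation that maximizes expected information gain about the category.
# Two-category toy pattern statistics; observations are noisy binary features.
import numpy as np

rng = np.random.default_rng(1)
n_loc = 6
# P(local feature = 1 at each location | category)
pattern = np.array([[0.9, 0.8, 0.2, 0.1, 0.5, 0.5],    # category "stripy"
                    [0.2, 0.1, 0.9, 0.8, 0.5, 0.5]])   # category "patchy"

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(belief, loc):
    """Expected entropy reduction about the category from fixating location loc."""
    p_one = belief @ pattern[:, loc]
    eig = entropy(belief)
    for value, p_val in ((1, p_one), (0, 1 - p_one)):
        like = pattern[:, loc] if value else 1 - pattern[:, loc]
        post = belief * like
        post /= post.sum()
        eig -= p_val * entropy(post)
    return eig

belief = np.array([0.5, 0.5])
true_cat = 0
for fixation in range(3):
    loc = max(range(n_loc), key=lambda l: expected_info_gain(belief, l))
    obs = int(rng.random() < pattern[true_cat, loc])
    like = pattern[:, loc] if obs else 1 - pattern[:, loc]
    belief = belief * like
    belief /= belief.sum()
    print(f"fixation {fixation}: location {loc}, observed {obs}, belief {belief.round(3)}")
```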
Affiliation(s)
- Scott Cheng-Hsin Yang
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Department of Cognitive Science, Central European University, Budapest, Hungary
- Daniel M Wolpert
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
10
Scholz A, von Helversen B, Rieskamp J. Eye movements reveal memory processes during similarity- and rule-based decision making. Cognition 2015;136:228-246. [DOI: 10.1016/j.cognition.2014.11.019]
11
Kanan C, Bseiso DNF, Ray NA, Hsiao JH, Cottrell GW. Humans have idiosyncratic and task-specific scanpaths for judging faces. Vision Res 2015;108:67-76. [PMID: 25641371] [DOI: 10.1016/j.visres.2015.01.013]
Abstract
Since Yarbus's seminal work, vision scientists have argued that our eye movement patterns differ depending upon our task. This has recently motivated the creation of multi-fixation pattern analysis algorithms that try to infer a person's task (or mental state) from their eye movements alone. Here, we introduce new algorithms for multi-fixation pattern analysis, and we use them to argue that people have scanpath routines for judging faces. We tested our methods on the eye movements of subjects as they made six distinct judgments about faces. We found that our algorithms could detect whether a participant is trying to distinguish angriness, happiness, trustworthiness, tiredness, attractiveness, or age. However, our algorithms were more accurate at inferring a subject's task when only trained on data from that subject than when trained on data gathered from other subjects, and we were able to infer the identity of our subjects using the same algorithms. These results suggest that (1) individuals have scanpath routines for judging faces, and that (2) these are diagnostic of that subject, but that (3) at least for the tasks we used, subjects do not converge on the same "ideal" scanpath pattern. Whether universal scanpath patterns exist for a task, we suggest, depends on the task's constraints and the level of expertise of the subject.
Affiliation(s)
- Christopher Kanan
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA.
- Dina N F Bseiso
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA.
- Nicholas A Ray
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA.
- Janet H Hsiao
- Department of Psychology, University of Hong Kong, Hong Kong.
- Garrison W Cottrell
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA.
12
Catenacci Volpi N, Quinton JC, Pezzulo G. How active perception and attractor dynamics shape perceptual categorization: a computational model. Neural Netw 2014;60:1-16. [PMID: 25105744] [DOI: 10.1016/j.neunet.2014.06.008]
Abstract
We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent-environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired, and then re-enacted during perceptual categorization. Perceptual categorization consists of a dynamic competition among attractors that encode the sensorimotor patterns typical of each category; action prediction success counts as "evidence" for a given category and contributes to falling into the corresponding attractor. The evidence accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements, describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition.
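A schematic sketch of the attractor-competition idea: each category is an attractor, and successful sensorimotor predictions add evidence that pulls the state toward the corresponding attractor. The dynamics and numbers below are illustrative assumptions, not the model's actual equations.

```python
# Toy attractor competition driven by prediction success (two categories).
import numpy as np

rng = np.random.default_rng(2)
true_cat = 1
# probability that an exploratory action's predicted outcome is confirmed
p_confirm = np.array([[0.8, 0.3],     # predictions made under the category-0 hypothesis
                      [0.3, 0.8]])    # predictions made under the category-1 hypothesis

x = np.array([0.5, 0.5])              # activation state between the two attractors
tau, gain = 10.0, 0.5

for step in range(40):
    probe = int(x.argmax())                            # actively test the leading hypothesis
    confirmed = rng.random() < p_confirm[probe, true_cat]
    evidence = np.zeros(2)
    evidence[probe] = gain if confirmed else -gain     # prediction success counts as evidence
    # net self-excitation minus mutual inhibition, plus accumulated evidence
    x += (0.2 * x - 0.8 * x[::-1] + evidence) / tau
    x = np.clip(x, 0.0, 1.5)
    if x.max() - x.min() > 1.0:                        # settled into one attractor
        break
print("decided category:", int(x.argmax()), "after", step + 1, "steps; state:", x.round(2))
```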
Affiliation(s)
- Nicola Catenacci Volpi
- School of Computer Science, Adaptive Systems Research Group, University of Hertfordshire, College Lane Campus, Hatfield, Hertfordshire AL10 9AB, United Kingdom.
- Jean Charles Quinton
- Clermont University, Blaise Pascal University, Pascal Institute, BP 10448, F-63000 Clermont-Ferrand, France; CNRS, UMR 6602, Pascal Institute, F-63171 Aubiere, France.
- Giovanni Pezzulo
- Istituto di Scienze e Tecnologie della Cognizione - CNR, Via S. Martino della Battaglia, 44 - 00185 Rome, Italy.
13
Jahn G, Braatz J. Memory indexing of sequential symptom processing in diagnostic reasoning. Cogn Psychol 2014;68:59-97. [DOI: 10.1016/j.cogpsych.2013.11.002]
14
Children's sequential information search is sensitive to environmental probabilities. Cognition 2014;130:74-80. [DOI: 10.1016/j.cognition.2013.09.007]
15
Vigo R, Zeigler DE, Halsey PA. Gaze and informativeness during category learning: Evidence for an inverse relation. Visual Cognition 2013. [DOI: 10.1080/13506285.2013.800931]
16
Information search with situation-specific reward functions. Judgment and Decision Making 2012. [DOI: 10.1017/s1930297500002977]
Abstract
The goal of obtaining information to improve classification accuracy can strongly conflict with the goal of obtaining information for improving payoffs. Two environments with such a conflict were identified through computer optimization. Three subsequent experiments investigated people's search behavior in these environments. Experiments 1 and 2 used a multiple-cue probabilistic category-learning task to convey environmental probabilities. In a subsequent search task subjects could query only a single feature before making a classification decision. The crucial manipulation concerned the search-task reward structure. The payoffs corresponded either to accuracy, with equal rewards associated with the two categories, or to an asymmetric payoff function, with different rewards associated with each category. In Experiment 1, in which learning-task feedback corresponded to the true category, people later preferentially searched the accuracy-maximizing feature, whether or not this would improve monetary rewards. In Experiment 2, an asymmetric reward structure was used during learning. Subjects searched the reward-maximizing feature when asymmetric payoffs were preserved in the search task. However, if search-task payoffs corresponded to accuracy, subjects preferentially searched a feature that was suboptimal for reward and accuracy alike. Importantly, this feature would have been most useful under the learning-task payoff structure. Experiment 3 found that, if words and numbers are used to convey environmental probabilities, neither reward nor accuracy consistently predicts search. These findings emphasize the necessity of taking into account people's goals and search-and-decision processes during learning, thereby challenging current models of information search.
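The accuracy/reward conflict described here can be made concrete with a toy computation: the feature that most improves expected classification accuracy (probability gain) need not be the feature that most improves expected payoff under asymmetric rewards. The probabilities and payoffs below are invented, not the environments identified by the authors' optimization.

```python
# Value of querying one feature under a symmetric (accuracy) vs asymmetric payoff matrix.
import numpy as np

prior = np.array([0.7, 0.3])                     # P(category A), P(category B)
# P(feature = 1 | category) for two candidate features
features = {"f1": np.array([0.9, 0.5]), "f2": np.array([0.7, 0.1])}
payoff = np.array([[1.0, 0.0],                   # payoff[chosen, true]: symmetric = accuracy
                   [0.0, 1.0]])
payoff_asym = np.array([[1.0, 0.0],              # asymmetric: a correct "B" pays 5x
                        [0.0, 5.0]])

def best_expected_payoff(belief, pay):
    return max(belief @ pay[choice] for choice in range(len(belief)))

def value_of_query(likelihood, pay):
    """Expected payoff after observing the feature, minus payoff from acting now."""
    value = 0.0
    for obs in (1, 0):
        like = likelihood if obs else 1 - likelihood
        p_obs = prior @ like
        post = prior * like / p_obs
        value += p_obs * best_expected_payoff(post, pay)
    return value - best_expected_payoff(prior, pay)

for name, like in features.items():
    acc_gain = value_of_query(like, payoff)          # probability gain
    reward_gain = value_of_query(like, payoff_asym)  # expected-reward gain
    print(f"{name}: accuracy gain = {acc_gain:.3f}, reward gain = {reward_gain:.3f}")
```

With these numbers, f1 has the larger accuracy gain while f2 has the larger reward gain, which is exactly the kind of divergence the experiments exploit.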
17
How prior knowledge affects selective attention during category learning: An eyetracking study. Mem Cognit 2010;39:649-665. [PMID: 21264587] [DOI: 10.3758/s13421-010-0050-3]
18
Nelson JD, McKenzie CRM, Cottrell GW, Sejnowski TJ. Experience matters: information acquisition optimizes probability gain. Psychol Sci 2010;21:960-969. [PMID: 20525915] [PMCID: PMC2926803] [DOI: 10.1177/0956797610372637]
Abstract
Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information are equally consistent with extant data on human information acquisition: information gain, Kullback-Leibler distance, probability gain (error minimization), and impact. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects' information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects' preference for probability gain is robust, suggesting that the other models contribute little to subjects' search behavior.
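For reference, a sketch of the four value-of-information measures named here, computed for a single binary query about a binary category. The formulas follow standard definitions (note that expected KL distance coincides with expected information gain, though the two differ for individual outcomes; the "impact" formula is one common formulation), and the prior and likelihoods are toy numbers.

```python
# Four value-of-information measures for one binary query about a binary category.
import numpy as np

prior = np.array([0.7, 0.3])               # P(category)
likelihood = np.array([0.9, 0.2])          # P(query answer = "yes" | category)

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def posteriors_and_weights(prior, likelihood):
    for like in (likelihood, 1 - likelihood):
        p_ans = prior @ like
        yield prior * like / p_ans, p_ans

def information_gain(prior, likelihood):     # expected entropy reduction
    return shannon(prior) - sum(w * shannon(post)
                                for post, w in posteriors_and_weights(prior, likelihood))

def kl_distance(prior, likelihood):          # expected KL divergence of posterior from prior
    return sum(w * np.sum(post[post > 0] * np.log2(post[post > 0] / prior[post > 0]))
               for post, w in posteriors_and_weights(prior, likelihood))

def probability_gain(prior, likelihood):     # expected gain in classification accuracy
    return sum(w * post.max()
               for post, w in posteriors_and_weights(prior, likelihood)) - prior.max()

def impact(prior, likelihood):               # expected absolute change in beliefs
    return sum(w * np.abs(post - prior).sum()
               for post, w in posteriors_and_weights(prior, likelihood))

for f in (information_gain, kl_distance, probability_gain, impact):
    print(f"{f.__name__:16s} {f(prior, likelihood):.3f}")
```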
19
Ferro M, Ognibene D, Pezzulo G, Pirrelli V. Reading as active sensing: a computational model of gaze planning in word recognition. Front Neurorobot 2010;4:6. [PMID: 20577589] [PMCID: PMC2889689] [DOI: 10.3389/fnbot.2010.00006]
Abstract
We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.
Affiliation(s)
- Marcello Ferro
- Istituto di Linguistica Computazionale "Antonio Zampolli" - CNR Pisa, Italy
20
Coen-Cagli R, Coraggio P, Napoletano P, Schwartz O, Ferraro M, Boccignone G. Visuomotor characterization of eye movements in a drawing task. Vision Res 2009;49:810-818. [PMID: 19268685] [DOI: 10.1016/j.visres.2009.02.016]
Abstract
Understanding visuomotor coordination requires the study of tasks that engage mechanisms for the integration of visual and motor information; in this paper we choose a paradigmatic yet little-studied example of such a task, namely realistic drawing. On the one hand, our data indicate that the motor task has little influence on which regions of the image are overall most likely to be fixated: salient features are fixated most often. On the other hand, the effect of motor constraints is revealed in the temporal aspect of the scanpaths: (1) subjects direct their gaze to an object mostly when they are acting upon (drawing) it; and (2) in support of graphically continuous hand movements, scanpaths resemble edge-following patterns along image contours. For a better understanding of such properties, a computational model is proposed in the form of a novel kind of Dynamic Bayesian Network, and simulation results are compared with human eye-hand data.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Neuroscience, Albert Einstein College of Medicine of Yeshiva University, 1410 Pelham Pkwy S., Rm 921, Bronx, New York 10461, USA.
21
Barrington L, Marks TK, Hsiao JHW, Cottrell GW. NIMBLE: a kernel density model of saccade-based visual memory. J Vis 2008;8:17.1-14. [PMID: 19146318] [DOI: 10.1167/8.14.17]
Abstract
We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation.
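A toy sketch of the kernel-density classification scheme described here: store fragments sampled at fixation points, model each class's fragments with a kernel density estimate, and sum per-fragment log-likelihoods to classify a new image. Plain patch crops stand in for the model's foveated representation, and all data below are synthetic.

```python
# Fragment-based kernel-density classification on synthetic images.
import numpy as np

RNG = np.random.default_rng(0)
PATCH, BANDWIDTH = 5, 0.5

def fragments(image, fixations):
    """Extract flattened PATCH x PATCH fragments centered on fixation points."""
    h = PATCH // 2
    return np.array([image[r - h:r + h + 1, c - h:c + h + 1].ravel()
                     for r, c in fixations])

def log_kde(query, stored, bandwidth=BANDWIDTH):
    """Log kernel-density estimate of one fragment under a class's stored fragments."""
    sq_dist = ((stored - query) ** 2).sum(axis=1)
    log_k = -sq_dist / (2 * bandwidth ** 2)                 # Gaussian kernel, up to a constant
    return np.logaddexp.reduce(log_k) - np.log(len(stored))

def classify(image, fixations, memory):
    """Sum per-fragment log-likelihoods per class; return the best class."""
    frags = fragments(image, fixations)
    scores = {label: sum(log_kde(f, stored) for f in frags)
              for label, stored in memory.items()}
    return max(scores, key=scores.get)

# Tiny synthetic example: class "bright" vs class "dark" images.
def make_image(mean):
    return np.clip(RNG.normal(mean, 0.1, (20, 20)), 0, 1)

fix = [(5, 5), (10, 10), (14, 6)]
memory = {"bright": fragments(make_image(0.8), fix),
          "dark": fragments(make_image(0.2), fix)}
print(classify(make_image(0.75), fix, memory))              # expected: "bright"
```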
Affiliation(s)
- Luke Barrington
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA