1
Madsen JK, de-Wit L, Ayton P, Brick C, de-Moliere L, Groom CJ. Behavioral science should start by assuming people are reasonable. Trends Cogn Sci 2024; 28:583-585. PMID: 38763803; DOI: 10.1016/j.tics.2024.04.010.
Abstract
Should policymaking assume humans are irrational? Using empirical, theoretical, and philosophical arguments, we suggest a more useful frame is that human behavior is reasonable. Through identifying goals and systemic factors shaping behavior, we suggest that assuming people are reasonable enables behavioral science to be more effective in shaping public policy.
Affiliation(s)
- Jens Koed Madsen
- Department of Psychological and Behavioural Sciences, London School of Economics and Political Science, Houghton Street, WC2A 2AE, London, UK
- Lee de-Wit
- Department of Psychology, University of Cambridge, Downing Street, CB2 3EB, Cambridge, UK.
- Peter Ayton
- Centre for Decision Research, University of Leeds, Woodhouse Lane, LS2 9JT, Leeds, UK
- Cameron Brick
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129, 1001 NK, Amsterdam, The Netherlands; Inland Norway University of Applied Sciences, Holsetgata 31, 2318 Hamar, Norway
- Carla J Groom
- UK Department for Work & Pensions, Human-Centred Design Science Division, Caxton House, Tothill St, London SW1H 9NA, UK
2
Park H, Arazi A, Talluri BC, Celotto M, Panzeri S, Stocker AA, Donner TH. Confirmation Bias through Selective Use of Evidence in Human Cortex. bioRxiv [Preprint] 2024:2024.06.21.600060. PMID: 38979146; PMCID: PMC11230165; DOI: 10.1101/2024.06.21.600060.
Abstract
Decision-makers often process new evidence selectively, depending on their current beliefs about the world. We asked whether such confirmation biases result from biases in the encoding of sensory evidence in the brain, or alternatively in the utilization of encoded evidence for behavior. Human participants estimated the source of a sequence of visual-spatial evidence samples while we measured cortical population activity with magnetoencephalography (MEG). Halfway through the sequence, participants were prompted to judge the more likely source category. Their processing of subsequent evidence depended on its consistency with the previously chosen category, but the encoding of evidence in cortical activity did not. Instead, the encoded evidence in parietal and primary visual cortex contributed less to the estimation report when that evidence was inconsistent with the previous choice. We conclude that confirmation bias originates from the way in which decision-makers utilize information encoded in the brain. This provides room for deliberative control.
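The core result (evidence encoded intact, but down-weighted at read-out when it contradicts an earlier choice) can be illustrated with a toy estimator. The sketch below is not the authors' model; the evidence samples, the halfway categorical choice, and the down-weighting factor are all illustrative assumptions.

```python
def estimate_source(samples, down_weight=0.5):
    """Average evidence samples into a source estimate, with an interim
    categorical choice halfway through the sequence. Later samples whose sign
    contradicts the chosen category contribute with reduced weight, mimicking
    a choice-consistent read-out. down_weight is purely illustrative."""
    half = len(samples) // 2
    choice = 1 if sum(samples[:half]) >= 0 else -1   # interim category judgment
    weights = [1.0] * half
    for s in samples[half:]:
        consistent = (s >= 0) == (choice > 0)
        weights.append(1.0 if consistent else down_weight)
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

# early evidence favours the "+" source, later evidence the "-" source
samples = [2.0, 1.0, 0.5, -1.5, -1.5, -1.5]
biased = estimate_source(samples)          # read-out pulled toward the "+" choice
flat = sum(samples) / len(samples)         # unweighted read-out of the same samples
```

With a down-weight of 1.0 the two read-outs coincide; any value below 1.0 pulls the final estimate toward the interim choice, even though every sample is "encoded" in full, which is the dissociation the study reports.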
Affiliation(s)
- Hame Park
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistraße 52, Hamburg 20251, Germany
- Ayelet Arazi
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistraße 52, Hamburg 20251, Germany
- Bharath Chandra Talluri
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistraße 52, Hamburg 20251, Germany
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, USA
- Marco Celotto
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, Istituto Italiano Di Tecnologia, 38068, Rovereto, Italy
- Stefano Panzeri
- Institute for Neural Information Processing, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, 20251, Hamburg, Germany
- Alan A Stocker
- Department of Psychology, University of Pennsylvania, 3710 Hamilton Walk, Philadelphia, PA 19106, USA
- Tobias H Donner
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistraße 52, Hamburg 20251, Germany
- Bernstein Center for Computational Neuroscience Berlin, Humboldt-University Berlin, Philippstr. 13, Haus 6, 10115 Berlin, Germany
3
O’Leary AP, Fletcher W. Thinking about Believing: Can Metacognitive Reflection Encourage Belief Updating? J Intell 2024; 12:47. PMID: 38786649; PMCID: PMC11122332; DOI: 10.3390/jintelligence12050047.
Abstract
People often cling to their beliefs even in the face of counterevidence. The current study explored metacognitive reflection as a potential driver for belief updating. In a randomized controlled experiment (n = 155), participants rated their degree of agreement with a statement regarding genetic modification in humans. Following this, participants were presented with a passage containing an argument counter to their indicated belief. Participants in the metacognition condition were asked to deeply reflect on the ways in which the passage was similar to or different from their current beliefs. Participants in the control condition were asked to engage in more shallow reflection on the composition of the passage. After reflecting on the counterevidence, participants were asked to again rate their agreement with the statement regarding human gene modification. Both groups updated their initial beliefs to be more consistent with the presented counterevidence. Although greater belief updating was observed in those who metacognitively reflected on the passage, this effect did not reach significance (p = .055). These findings suggest that reflecting on counterevidence has the potential to encourage belief updating, regardless of whether that reflection is metacognitive in nature, and provide promise for future work investigating the role of metacognition in belief updating.
4
Schulz L, Bhui R. Political reinforcement learners. Trends Cogn Sci 2024; 28:210-222. PMID: 38195364; DOI: 10.1016/j.tics.2023.12.001.
Abstract
Politics can seem home to the most calculating and yet least rational elements of humanity. How might we systematically characterize this spectrum of political cognition? Here, we propose reinforcement learning (RL) as a unified framework to dissect the political mind. RL describes how agents algorithmically navigate complex and uncertain domains like politics. Through this computational lens, we outline three routes to political differences, stemming from variability in agents' conceptions of a problem, the cognitive operations applied to solve the problem, or the backdrop of information available from the environment. A computational vantage on maladies of the political mind offers enhanced precision in assessing their causes, consequences, and cures.
Affiliation(s)
- Lion Schulz
- Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8-14, 72076 Tübingen, Germany.
- Rahul Bhui
- Sloan School of Management and Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
5
Lefebvre G, Deroy O, Bahrami B. The roots of polarization in the individual reward system. Proc Biol Sci 2024; 291:20232011. PMID: 38412967; PMCID: PMC10898967; DOI: 10.1098/rspb.2023.2011.
Abstract
Polarization raises concerns for democracy and society, and these concerns have grown in the internet era, where (mis)information has become ubiquitous, its transmission is faster than ever, and the freedom and means of opinion expression keep expanding. The origin of polarization nevertheless remains unclear, with multiple social and emotional factors and individual reasoning biases likely to explain its current forms. In the present work, we adopt a principled approach and show that polarization tendencies can take root in biased reward processing of new information in favour of choice-confirmatory evidence. Through agent-based simulations, we show that confirmation bias in individual learning is an independent mechanism that could be sufficient to create polarization at the group level, independently of any additional assumptions about the opinions themselves, a priori beliefs about them, information transmission mechanisms, or the structure of social relationships between individuals. This generative process can interact with polarization mechanisms described elsewhere, but it constitutes an entrenched biological tendency that helps explain the extraordinary resilience of polarization to mitigating efforts such as dramatic informational change in the environment.
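A minimal sketch of this mechanism, assuming a simple Rescorla-Wagner learner with a confirmatory learning-rate asymmetry; the scripted 50/50 outcome sequences and all parameter values are illustrative, not taken from the paper.

```python
def learn(outcomes_by_option, alpha_confirm=0.5, alpha_disconfirm=0.1):
    """Greedy two-option learner that over-weights outcomes confirming its
    current choice (alpha_confirm > alpha_disconfirm). outcomes_by_option maps
    each option to the reward sequence observed on successive picks of it.
    Parameter values are illustrative, not those of the cited model."""
    q = [0.5, 0.5]            # start indifferent between the two options
    pulls = [0, 0]
    total = sum(len(seq) for seq in outcomes_by_option)
    for _ in range(total):
        choice = 0 if q[0] >= q[1] else 1
        if pulls[choice] >= len(outcomes_by_option[choice]):
            break             # the favourite's scripted outcomes are exhausted
        r = outcomes_by_option[choice][pulls[choice]]
        pulls[choice] += 1
        pe = r - q[choice]    # reward prediction error
        alpha = alpha_confirm if pe > 0 else alpha_disconfirm
        q[choice] += alpha * pe
    return q

# both options pay off on exactly half their picks; only the first outcome differs
agent_a = learn([[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]])  # first pick of option 0 wins
agent_b = learn([[0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0]])  # first pick of option 0 loses
```

Both agents face objectively identical options, yet early luck plus confirmatory updating locks each into a different, over-valued favourite: a two-agent "population" already splits into opposite camps with no social transmission at all. With symmetric learning rates the same histories tend to leave both values near the 50% base rate and the preference fragile.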
Affiliation(s)
- Germain Lefebvre
- Crowd Cognition Group, Ludwig Maximilian Universität, Gabelsbergerstr 62, Munich 80333, Bavaria, Germany
- Ophélia Deroy
- Philosophy, LMU, Geschwister Scholl Platz 1, Munich 80539, Bavaria, Germany
- Bahador Bahrami
- Crowd Cognition Group, Ludwig Maximilian Universität, Gabelsbergerstr 62, Munich 80333, Bavaria, Germany
6
Fischer H, Huff M, Anders G, Said N. Metacognition, public health compliance, and vaccination willingness. Proc Natl Acad Sci U S A 2023; 120:e2105425120. PMID: 37851676; PMCID: PMC10614760; DOI: 10.1073/pnas.2105425120.
Abstract
Metacognition, our ability to reflect on our own beliefs, manifests itself in the confidence we have in these beliefs, and helps us guide our behavior in complex and uncertain environments. Here, we provide empirical tests of the importance of metacognition during the pandemic. Bayesian and frequentist analyses demonstrate that citizens with higher metacognitive sensitivity (where confidence differentiates correct from incorrect COVID-19 beliefs) reported higher willingness to vaccinate against COVID-19 and higher compliance with recommended public health measures. Notably, this benefit of accurate introspection held when controlling for the accuracy of COVID-19 beliefs. By demonstrating how vaccination willingness and compliance may relate to insight into the varying accuracy of beliefs, rather than only the accuracy of the beliefs themselves, this research highlights the critical role of metacognitive ability in times of crisis. However, we do not find sufficient evidence to conclude that citizens with higher metacognitive sensitivity were more likely to comply with recommended public health measures when controlling for the absolute level of confidence citizens had in their COVID-19 beliefs.
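Metacognitive sensitivity in this sense can be separated from accuracy with a simple discrimination measure. The sketch below uses ROC area as a stand-in for the sensitivity measures used in such studies; the two citizens and their confidence ratings are invented for illustration.

```python
def confidence_auc(correct, confidence):
    """Probability that a randomly drawn correct belief was held with higher
    confidence than a randomly drawn incorrect one (ties count half).
    0.5 means confidence carries no insight; 1.0 means perfect insight.
    A simple stand-in for formal metacognitive-sensitivity measures."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

# two hypothetical citizens with identical accuracy (4 of 6 beliefs correct)
correct = [1, 1, 1, 1, 0, 0]
insightful = confidence_auc(correct, [0.9, 0.8, 0.9, 0.7, 0.3, 0.4])
unaware = confidence_auc(correct, [0.5, 0.9, 0.4, 0.3, 0.9, 0.8])
```

Both hypothetical citizens hold equally accurate beliefs, but only the first knows when to trust them; the paper's claim is that this second quantity, not accuracy alone, tracks vaccination willingness and compliance.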
Affiliation(s)
- Helen Fischer
- Perception and Action Lab, Leibniz Institut für Wissensmedien, 72076 Tübingen, Germany
- Markus Huff
- Perception and Action Lab, Leibniz Institut für Wissensmedien, 72076 Tübingen, Germany
- Applied Cognitive Psychology, University of Tübingen, 72076 Tübingen, Germany
- Gerrit Anders
- Perception and Action Lab, Leibniz Institut für Wissensmedien, 72076 Tübingen, Germany
- Nadia Said
- Applied Cognitive Psychology, University of Tübingen, 72076 Tübingen, Germany
7
Lakhlifi C, Rohaut B. Heuristics and biases in medical decision-making under uncertainty: The case of neuroprognostication for consciousness disorders. Presse Med 2023; 52:104181. PMID: 37821058; DOI: 10.1016/j.lpm.2023.104181.
Abstract
Neuroprognostication for consciousness disorders can be very complex and prone to high uncertainty. Despite notable advances in dedicated scales and physiological markers based on innovative paradigms, these technical developments are often overshadowed by factors intrinsic to the medical environment. Beyond the scarcity of objective data guiding medical decisions, factors like time pressure, fatigue, multitasking, and emotional load can drive clinicians to rely more on heuristic-based clinical reasoning. Such an approach, albeit beneficial under certain circumstances, may lead to systematic judgment errors and impair medical decisions, especially in complex and uncertain environments. After a brief review of the main theoretical frameworks, this paper explores the influence of clinicians' cognitive biases on clinical reasoning and decision-making in the challenging context of neuroprognostication for consciousness disorders. The discussion then turns to developing and implementing strategies designed to mitigate these biases and their impact, aiming to enhance quality of care and patient safety.
Affiliation(s)
- Camille Lakhlifi
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, APHP, Hôpital de la Pitié Salpêtrière, Paris, France; Université Paris Cité, Paris, France
- Benjamin Rohaut
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, APHP, Hôpital de la Pitié Salpêtrière, Paris, France; AP-HP, Hôpital de la Pitié Salpêtrière, MIR Neuro, DMU Neurosciences, Paris, France.
8
Sharot T, Rollwage M, Sunstein CR, Fleming SM. Why and When Beliefs Change. Perspect Psychol Sci 2023; 18:142-151. PMID: 35939828; DOI: 10.1177/17456916221082967.
Abstract
Why people do or do not change their beliefs has been a long-standing puzzle. Sometimes people hold onto false beliefs despite ample contradictory evidence; sometimes they change their beliefs without sufficient reason. Here, we propose that the utility of a belief is derived from the potential outcomes associated with holding it. Outcomes can be internal (e.g., positive/negative feelings) or external (e.g., material gain/loss), and only some are dependent on belief accuracy. Belief change can then be understood as an economic transaction in which the multidimensional utility of the old belief is compared against that of the new belief. Change will occur when potential outcomes alter across attributes, for example because of changing environments or when certain outcomes are made more or less salient.
Affiliation(s)
- Tali Sharot
- Department of Experimental Psychology, University College London; Max Planck University College London Centre for Computational Psychiatry and Ageing Research; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Max Rollwage
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Wellcome Centre for Human Neuroimaging, University College London
- Stephen M Fleming
- Department of Experimental Psychology, University College London; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology; Wellcome Centre for Human Neuroimaging, University College London
9
Baer C, Kidd C. Learning with certainty in childhood. Trends Cogn Sci 2022; 26:887-896. PMID: 36085134; DOI: 10.1016/j.tics.2022.07.010.
Abstract
Learners use certainty to guide learning. They maintain existing beliefs when certain, but seek further information when they feel uninformed. Here, we review developmental evidence that this metacognitive strategy does not require reportable processing. Uncertainty prompts nonverbal human infants and nonhuman animals to engage in strategies like seeking help, searching for additional information, or opting out. Certainty directs children's attention and active learning strategies and provides a common metric for comparing and integrating conflicting beliefs across people. We conclude that certainty is a continuous, domain-general signal of belief quality even early in life.
Affiliation(s)
- Carolyn Baer
- Department of Psychology, University of California, Berkeley, CA, USA.
- Celeste Kidd
- Department of Psychology, University of California, Berkeley, CA, USA
10
Dautriche I, Goupil L, Smith K, Rabagliati H. Two-Year-Olds' Eye Movements Reflect Confidence in Their Understanding of Words. Psychol Sci 2022; 33:1842-1856. PMID: 36126649; DOI: 10.1177/09567976221105208.
Abstract
We studied the fundamental issue of whether children evaluate the reliability of their language interpretation, that is, their confidence in understanding words. In two experiments, 2-year-olds (Experiment 1: N = 50; Experiment 2: N = 60) saw two objects and heard one of them being named; both objects were then hidden behind screens and children were asked to look toward the named object, which was eventually revealed. When children knew the label used, they showed increased postdecision persistence after a correct compared with an incorrect anticipatory look, a marker of confidence in word comprehension (Experiment 1). When interacting with an unreliable speaker, children showed accurate word comprehension but reduced confidence in the accuracy of their own choice, indicating that children's confidence estimates are influenced by social information (Experiment 2). Thus, by the age of 2 years, children can estimate their confidence during language comprehension, long before they can talk about their linguistic skills.
Affiliation(s)
- Isabelle Dautriche
- Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique (CNRS); Institute of Language, Communication and the Brain, Aix-Marseille University, CNRS
- Louise Goupil
- Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS; Department of Psychology, University of East London
- Kenny Smith
- School of Philosophy, Psychology and Language Sciences, The University of Edinburgh
- Hugh Rabagliati
- School of Philosophy, Psychology and Language Sciences, The University of Edinburgh
11
Miyoshi K, Sakamoto Y, Nishida S. On the assumptions behind metacognitive measurements: Implications for theory and practice. J Vis 2022; 22:18. PMID: 36149676; PMCID: PMC9520519; DOI: 10.1167/jov.22.10.18.
Abstract
Theories of visual confidence have largely been grounded in the gaussian signal detection framework. This framework is so dominant that idiosyncratic consequences of this distributional assumption have remained unappreciated. This article reports systematic comparisons of the gaussian signal detection framework to its logistic counterpart in the measurement of metacognitive accuracy. Because of the difference in their distribution kurtosis, these frameworks are found to provide different perspectives on the efficiency of confidence rating relative to objective decision (the logistic model intrinsically gives a greater meta-d'/d' ratio than the gaussian model). These frameworks can also provide opposing conclusions regarding metacognitive inefficiency along the internal evidence continuum (whether meta-d' is larger or smaller for higher levels of confidence). Previous theories developed on these lines of analysis may need to be revisited, as the gaussian and logistic metacognitive models received roughly equivalent support in our quantitative model comparisons. Despite these discrepancies, however, we found that across-condition or across-participant comparisons of metacognitive measures are relatively robust to the distributional assumptions, which provides much assurance to conventional research practice. We hope this article promotes awareness of the significance of hidden modeling assumptions, contributing to the cumulative development of the field.
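The gaussian/logistic contrast can be seen already at the level of type-1 sensitivity, where each distributional assumption implies a different link function. This sketch is only the type-1 analogue of the paper's point, not its meta-d' machinery; the hit and false-alarm rates are illustrative.

```python
from math import log, pi, sqrt
from statistics import NormalDist

def dprime_gaussian(hit_rate, fa_rate):
    """Type-1 sensitivity under the equal-variance gaussian SDT model."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def dprime_logistic(hit_rate, fa_rate):
    """The analogous index when the evidence distributions are assumed
    logistic; values are in logistic-scale units, not gaussian SDs."""
    def logit(p):
        return log(p / (1 - p))
    return logit(hit_rate) - logit(fa_rate)

# identical observed behaviour, two distributional assumptions
g = dprime_gaussian(0.8, 0.2)
l = dprime_logistic(0.8, 0.2)
l_rescaled = l / (pi / sqrt(3))   # unit logistic distribution has SD pi/sqrt(3)
```

Even after matching standard deviations the two models disagree (here roughly 1.53 vs 1.68), because the distributions differ in kurtosis and not just scale; this is why derived quantities such as the meta-d'/d' ratio can shift with the assumed distribution.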
Affiliation(s)
- Shin'ya Nishida
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
12
Palminteri S, Lebreton M. The computational roots of positivity and confirmation biases in reinforcement learning. Trends Cogn Sci 2022; 26:607-621. PMID: 35662490; DOI: 10.1016/j.tics.2022.04.005.
Abstract
Humans do not integrate new information objectively: outcomes carrying a positive affective value and evidence confirming one's own prior beliefs are overweighted. Until recently, theoretical and empirical accounts of the positivity and confirmation biases assumed them to be specific to 'high-level' belief updates. We present evidence against this account. Learning rates in reinforcement learning (RL) tasks, estimated across different contexts and species, generally present the same characteristic asymmetry, suggesting that belief and value updating processes share key computational principles and distortions. This bias generates over-optimistic expectations about the probability of making the right choices and, consequently, over-optimistic reward expectations. We discuss the normative and neurobiological roots of these RL biases and their place within the broader picture of behavioral decision-making theories.
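The learning-rate asymmetry and the over-optimism it produces can be sketched with a single-option Rescorla-Wagner update; all parameter values are illustrative, not taken from the paper.

```python
import random

def learned_value(p_reward, alpha_plus, alpha_minus, n=4000, seed=1):
    """Value learned for one option under a Rescorla-Wagner update with
    asymmetric learning rates: positive prediction errors scaled by
    alpha_plus, negative ones by alpha_minus. Returns the time-averaged
    estimate over the second half of learning to smooth out noise."""
    rng = random.Random(seed)
    q, tail = 0.5, []
    for t in range(n):
        r = 1.0 if rng.random() < p_reward else 0.0
        pe = r - q
        q += (alpha_plus if pe > 0 else alpha_minus) * pe
        if t >= n // 2:
            tail.append(q)
    return sum(tail) / len(tail)

unbiased = learned_value(0.5, alpha_plus=0.1, alpha_minus=0.1)     # hugs the true 50%
optimistic = learned_value(0.5, alpha_plus=0.1, alpha_minus=0.02)  # inflated estimate
```

At equilibrium p·α⁺·(1−q) = (1−p)·α⁻·q, i.e. q/(1−q) = (α⁺/α⁻)·p/(1−p); with α⁺/α⁻ = 5 and a true reward rate of 0.5, the biased learner settles near q ≈ 0.83, the over-optimistic reward expectation the authors describe.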
Affiliation(s)
- Stefano Palminteri
- Laboratoire de Neurosciences Cognitives et Computationnelles, Institut National de la Santé et Recherche Médicale, Paris, France; Département d'Études Cognitives, Ecole Normale Supérieure, Paris, France; Université de Recherche Paris Sciences et Lettres, Paris, France.
- Maël Lebreton
- Paris School of Economics, Paris, France; LabNIC, Department of Fundamental Neurosciences, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Science, Geneva, Switzerland.
13
Brody DC. Noise, Fake News, and Tenacious Bayesians. Front Psychol 2022; 13:797904. PMID: 35602675; PMCID: PMC9115576; DOI: 10.3389/fpsyg.2022.797904.
Abstract
A modeling framework based on the theory of signal processing, characterizing the dynamics of systems driven by the unraveling of information, is outlined and applied to describe the process of decision making. The input to this approach is the specification of the flow of information, which enables the representation of (i) reliable information, (ii) noise, and (iii) disinformation in a unified framework. Because the approach is designed to characterize the dynamics of human behavior, it is possible to quantify the impact of information control, including the impact of disseminating disinformation. It is shown that if a decision maker assigns an exceptionally high weight to one of the alternative realities, then under Bayesian logic their perception hardly changes over time, even when the evidence presented indicates that this alternative corresponds to a false reality. Thus, confirmation bias need not be incompatible with Bayesian updating. Drawing on the role played by noise in other areas of natural science, where noise is used to excite a system away from false attractors, a new approach to tackling the dark forces of fake news is proposed.
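The "tenacious Bayesian" effect follows directly from updating on the odds scale; a sketch with illustrative numbers (the prior weight and likelihood ratio are assumptions, not values from the paper):

```python
def posterior_after(prior, likelihood_ratio, n):
    """Bayesian update on the odds scale: n independent observations, each
    multiplying the odds of the favoured hypothesis by likelihood_ratio
    (here < 1, i.e. every observation counts AGAINST the belief)."""
    odds = prior / (1 - prior) * likelihood_ratio ** n
    return odds / (1 + odds)

p0 = 0.999   # exceptionally high prior weight on one "reality"
lr = 0.25    # each observation is 4x likelier under the rival hypothesis
trajectory = [posterior_after(p0, lr, n) for n in range(8)]
```

Through three disconfirming observations the posterior barely moves (0.999 to about 0.94), so from the outside the agent looks impervious to evidence while updating perfectly rationally; the belief only collapses around the fifth observation. Perturbing the agent away from such near-degenerate priors is the role the paper assigns to noise.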
Affiliation(s)
- Dorje C. Brody
- Department of Mathematics, University of Surrey, Guildford, United Kingdom
14
Paulus D, Fathi R, Fiedrich F, Van de Walle B, Comes T. On the Interplay of Data and Cognitive Bias in Crisis Information Management: An Exploratory Study on Epidemic Response. Inf Syst Front 2022:1-25. PMID: 35342331; PMCID: PMC8938164; DOI: 10.1007/s10796-022-10241-0.
Abstract
Humanitarian crises, such as the 2014 West Africa Ebola epidemic, challenge information management and thereby threaten the digital resilience of the responding organizations. Crisis information management (CIM) is characterised by the urgency to respond despite the uncertainty of the situation. Coupled with high stakes, limited resources and a high cognitive load, crises are prone to induce biases in the data and the cognitive processes of analysts and decision-makers. When biases remain undetected and untreated in CIM, they may lead to decisions based on biased information, increasing the risk of an inefficient response. Literature suggests that crisis response needs to address the initial uncertainty and possible biases by adapting to new and better information as it becomes available. However, we know little about whether adaptive approaches mitigate the interplay of data and cognitive biases. We investigated this question in an exploratory, three-stage experiment on epidemic response. Our participants were experienced practitioners in the fields of crisis decision-making and information analysis. We found that analysts fail to successfully debias data, even when biases are detected, and that this failure can be attributed to undervaluing debiasing efforts in favor of rapid results. This failure leads to the development of biased information products that are conveyed to decision-makers, who consequently make decisions based on biased information. Confirmation bias reinforces the reliance on conclusions reached with biased data, leading to a vicious cycle, in which biased assumptions remain uncorrected. We suggest mindful debiasing as a possible counter-strategy against these bias effects in CIM.
Affiliation(s)
- David Paulus
- Department of Multi-Actor Systems, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands
- Ramian Fathi
- Institute for Public Safety and Emergency Management, University of Wuppertal, Gaußstraße 20, 42119 Wuppertal, Germany
- Frank Fiedrich
- Institute for Public Safety and Emergency Management, University of Wuppertal, Gaußstraße 20, 42119 Wuppertal, Germany
- Bartel Van de Walle
- UNU-MERIT, United Nations University, Boschstraat 24, 6211 AX Maastricht, The Netherlands
- Tina Comes
- Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands
15
On second thoughts: changes of mind in decision-making. Trends Cogn Sci 2022; 26:419-431. DOI: 10.1016/j.tics.2022.02.004.
16
Cortese A. Metacognitive resources for adaptive learning. Neurosci Res 2021; 178:10-19. PMID: 34534617; DOI: 10.1016/j.neures.2021.09.003.
Abstract
Biological organisms display remarkably flexible behaviours, an area of active investigation in artificial intelligence and in computational and cognitive neuroscience. While inductive biases and broader cognitive functions are undoubtedly important, the ability to monitor and evaluate one's performance or oneself (metacognition) stands out as a powerful resource for efficient learning. Often measured as decision confidence in neuroscience and psychology experiments, metacognition appears to operate across a broad range of abstraction levels and downstream behavioural effects. Within this context, the formal investigation of how metacognition interacts with learning processes is a recent endeavour. Of special interest are the neural and computational underpinnings of confidence and reinforcement learning modules. This review discusses a general hierarchy of confidence functions and their neuro-computational relevance for adaptive behaviours. It then introduces novel ways to study the formation and use of meta-representations and nonconscious mental representations related to learning and confidence, and concludes with a discussion of outstanding questions and wider perspectives.
Affiliation(s)
- Aurelio Cortese
- Computational Neuroscience Labs, ATR Institute International, 619-0288 Kyoto, Japan.
17
Zmigrod L, Tsakiris M. Computational and neurocognitive approaches to the political brain: key insights and future avenues for political neuroscience. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200130. PMID: 33611999; PMCID: PMC7935136; DOI: 10.1098/rstb.2020.0130.
Abstract
Although the study of political behaviour has been traditionally restricted to the social sciences, new advances in political neuroscience and computational cognitive science highlight that the biological sciences can offer crucial insights into the roots of ideological thought and action. Echoing the dazzling diversity of human ideologies, this theme issue seeks to reflect the multiplicity of theoretical and methodological approaches to understanding the nature of the political brain. Cutting-edge research along three thematic strands is presented, including (i) computational approaches that zoom in on fine-grained mechanisms underlying political behaviour, (ii) neurocognitive perspectives that harness neuroimaging and psychophysiological techniques to study ideological processes, and (iii) behavioural studies and policy-minded analyses of such understandings across cultures and across ideological domains. Synthesizing these findings together, the issue elucidates core questions regarding the nature of uncertainty in political cognition, the mechanisms of social influence and the cognitive structure of ideological beliefs. This offers key directions for future biologically grounded research as well as a guiding map for citizens, psychologists and policymakers traversing the uneven landscape of modern polarization, misinformation, intolerance and dogmatism. This article is part of the theme issue ‘The political brain: neurocognitive and computational mechanisms’.
Affiliation(s)
- Leor Zmigrod
- Department of Psychology, University of Cambridge, Cambridge, UK; Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, UK; Churchill College, University of Cambridge, Cambridge, UK
- Manos Tsakiris
- The Warburg Institute, School of Advanced Study, London, UK; Department of Psychology, Royal Holloway, University of London, London, UK; Department of Behavioural and Cognitive Sciences, Faculty of Humanities, Education and Social Sciences, University of Luxembourg, Luxembourg