1. Jiao L, Wang Y, Liu X, Li L, Liu F, Ma W, Guo Y, Chen P, Yang S, Hou B. Causal Inference Meets Deep Learning: A Comprehensive Survey. Research (Washington, D.C.) 2024; 7:0467. PMID: 39257419; PMCID: PMC11384545; DOI: 10.34133/research.0467.
Abstract
Deep learning relies on learning from extensive data to generate prediction results. This approach may inadvertently capture spurious correlations within the data, leading to models that lack interpretability and robustness. Researchers have developed more profound and stable causal inference methods based on cognitive neuroscience. By replacing the correlation model with a stable and interpretable causal model, it is possible to mitigate the misleading nature of spurious correlations and overcome the limitations of model calculations. In this survey, we provide a comprehensive and structured review of causal inference methods in deep learning. Brain-like inference ideas are discussed from a brain-inspired perspective, and the basic concepts of causal learning are introduced. The article describes the integration of causal inference with traditional deep learning algorithms and illustrates its application to large model tasks as well as specific modalities in deep learning. The current limitations of causal inference and future research directions are discussed. Moreover, the commonly used benchmark datasets and the corresponding download links are summarized.
Affiliation(s)
- Licheng Jiao, Yuhan Wang, Xu Liu, Lingling Li, Fang Liu, Wenping Ma, Yuwei Guo, Puhua Chen, Shuyuan Yang, Biao Hou
- The School of Artificial Intelligence, Xidian University, Xi'an, China
2. Cortes RA, Weinberger AB, Green AE. The Mental Models Training App: Enhancing verbal reasoning through a cognitive training mobile application. Front Psychol 2023; 14:1150210. PMID: 36968736; PMCID: PMC10036765; DOI: 10.3389/fpsyg.2023.1150210.
Abstract
Introduction: Reasoning is a complex form of human cognition whose nature has long been debated. While a number of neurocognitive mechanisms for deductive reasoning have been offered, one of the most prominent accounts is Mental Model Theory (MMT). According to MMT, humans are able to manipulate and represent information for reasoning and problem solving by leveraging the brain's evolved visuospatial resources. Thus, when solving deductive reasoning problems, reasoners build "mental models" of the essential pieces of information conveyed in the premises, with their relations to each other represented spatially, even when the information contained within a reasoning problem is not intrinsically spatial. Crucially, taking a spatially based approach, such as building mental models, supports higher accuracy on deductive reasoning problems. However, no study has empirically tested whether explicitly training this mental modeling ability leads to improved deductive reasoning performance.
Method: Therefore, we designed the Mental Models Training App, a cognitive training mobile application which requires participants to complete increasingly difficult reasoning problems while using an external mental modeling tool. In this preregistered study (https://osf.io/4b7kn), we conducted a between-subjects experiment (N = 301) which compared the Mental Models Training App to 3 distinct control conditions in order to examine which specific components (if any) of the training were causally responsible for improved reasoning performance.
Results: Results demonstrate that, when compared to a passive control condition, the Mental Models Training App led to improvements in adults' verbal deductive reasoning performance both during and after the training intervention. However, contrary to our preregistered hypotheses, the training-induced improvements were not significantly larger than the effects of the active control conditions: one which included adaptive practice of the reasoning problems, and one which included adaptive practice as well as a spatial alphabetization control task.
Discussion: Therefore, while the present results demonstrate the ability of the Mental Models Training App to enhance verbal deductive reasoning, they do not support the hypothesis that directly training participants' mental modeling ability yields improved performance beyond the effects of adaptive practice of reasoning. Future research should examine the long-term effects of repeated usage of the Mental Models Training App, as well as transfer effects to other forms of reasoning. Finally, we present the Mental Models Training App as a free mobile application available on the Apple App Store (https://apps.apple.com/us/app/mental-models-training/id1664939931), in the hope that this translational research may be utilized by the general public to improve their reasoning ability.
Affiliation(s)
- Robert A. Cortes (corresponding author)
- Department of Psychology, Georgetown University, Washington, DC, United States
- Adam B. Weinberger
- Department of Psychology, Georgetown University, Washington, DC, United States
- Adam E. Green
- Department of Psychology, Georgetown University, Washington, DC, United States; Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
3. Yin P. A comment on arguments of mental model theory of causation. Journal of Cognitive Psychology 2023. DOI: 10.1080/20445911.2022.2162057.
Affiliation(s)
- Pengfei Yin
- School of Teacher Education, Baoding University, Baoding, People’s Republic of China
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, School of Psychology, Shaanxi Normal University, Xi’an, People’s Republic of China
4. Gerstenberg T. What would have happened? Counterfactuals, hypotheticals and causal judgements. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210339. PMID: 36314143; PMCID: PMC9629435; DOI: 10.1098/rstb.2021.0339.
Abstract
How do people make causal judgements? In this paper, I show that counterfactual simulations are necessary for explaining causal judgements about events, and that hypotheticals do not suffice. In two experiments, participants viewed video clips of dynamic interactions between billiard balls. In Experiment 1, participants either made hypothetical judgements about whether ball B would go through the gate if ball A were not present in the scene, or counterfactual judgements about whether ball B would have gone through the gate if ball A had not been present. Because the clips featured a block in front of the gate that sometimes moved and sometimes stayed put, hypothetical and counterfactual judgements came apart. A computational model that evaluates hypotheticals and counterfactuals by running noisy physical simulations accurately captured participants' judgements. In Experiment 2, participants judged whether ball A caused ball B to go through the gate. The results showed a tight fit between counterfactual and causal judgements, whereas hypotheticals did not predict causal judgements. I discuss the implications of this work for theories of causality, and for studying the development of counterfactual thinking in children. This article is part of the theme issue 'Thinking about possibilities: mechanisms, ontogeny, functions and phylogeny'.
Affiliation(s)
- Tobias Gerstenberg
- Stanford University, Department of Psychology, 450 Jane Stanford Way, Bldg 420, Stanford, CA 94305, USA
5. Andonovski N. Episodic representation: A mental models account. Front Psychol 2022; 13:899371. PMID: 35936308; PMCID: PMC9355728; DOI: 10.3389/fpsyg.2022.899371.
Abstract
This paper offers a modeling account of episodic representation. I argue that the episodic system constructs mental models: representations that preserve the spatiotemporal structure of represented domains. In prototypical cases, these domains are events: occurrences taken by subjects to have characteristic structures, dynamics and relatively determinate beginnings and ends. Due to their simplicity and manipulability, mental event models can be used in a variety of cognitive contexts: in remembering the personal past, but also in future-oriented and counterfactual imagination. As structural representations, they allow surrogative reasoning, supporting inferences about their constituents which can be used in reasoning about the represented events.
6. O'Neill K, Henne P, Bello P, Pearson J, De Brigard F. Confidence and gradation in causal judgment. Cognition 2022; 223:105036. DOI: 10.1016/j.cognition.2022.105036.
7. Johnson-Laird PN, Khemlani S. What happened to the “new paradigm”? A commentary on Knauff and Gazzo Castañeda (2022). Thinking & Reasoning 2022. DOI: 10.1080/13546783.2021.2022532.
Affiliation(s)
- P. N. Johnson-Laird
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Department of Psychology, New York University, New York, NY, USA
- Sangeet Khemlani
- Navy Center for Applied Research in Artificial Intelligence, US Naval Research Laboratory, Washington, DC, USA

8. Yin P, Sun J. Is causation deterministic or probabilistic? A critique of Frosch and Johnson-Laird (2011). Journal of Cognitive Psychology 2021. DOI: 10.1080/20445911.2021.1963265.
Affiliation(s)
- Pengfei Yin
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, School of Psychology, Shaanxi Normal University, Xi’an, People’s Republic of China
- Jinrui Sun
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, School of Psychology, Shaanxi Normal University, Xi’an, People’s Republic of China

9. Gerstenberg T, Stephan S. A counterfactual simulation model of causation by omission. Cognition 2021; 216:104842. PMID: 34303272; DOI: 10.1016/j.cognition.2021.104842.
Abstract
When do people say that an event that did not happen was a cause? We extend the counterfactual simulation model (CSM) of causal judgment (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2021) and test it in a series of three experiments that look at people's causal judgments about omissions in dynamic physical interactions. The problem of omissive causation highlights a series of questions that need to be answered in order to give an adequate causal explanation of why something happened: what are the relevant variables, what are their possible values, how are putative causal relationships evaluated, and how is the causal responsibility for an outcome attributed to multiple causes? The CSM predicts that people make causal judgments about omissions in physical interactions by using their intuitive understanding of physics to mentally simulate what would have happened in relevant counterfactual situations. Prior work has argued that normative expectations affect judgments of omissive causation. Here we suggest a concrete mechanism of how this happens: expectations affect what counterfactuals people consider, and the more certain people are that the counterfactual outcome would have been different from what actually happened, the more causal they judge the omission to be. Our experiments show that both the structure of the physical situation as well as expectations about what will happen affect people's judgments.
10. Sebben S, Ullrich J. Can conditionals explain explanations? A modus ponens model of B because A. Cognition 2021; 215:104812. PMID: 34246085; DOI: 10.1016/j.cognition.2021.104812.
Abstract
We suggest a normative model for the evaluation of explanations B because A based on probabilistic conditional reasoning and compare it with empirical data. According to the modus ponens model of explanations, the probability of B because A should equal the joint probability of the conditional if A then B and the explanans A. We argue that B because A expresses the conjunction of A and B as well as positive relevance of A for B. In Study 1, participants (N = 80) judged the subjective probabilities of 20 sets of statements with a focus on belief-based reasoning under uncertainty. In Study 2, participants (N = 376) were assigned to one of six item sets for which we varied the inferential relevance of A for B to explore boundary conditions of our model. We assessed the performance of our model across a range of analyses and report results on the Equation, a fundamental model in research on probabilistic reasoning concerning the evaluation of conditionals. In both studies, results indicate that participants' belief in statements B because A followed model predictions systematically. However, a sizeable proportion of sets of beliefs contained at least one incoherence, indicating deviations from the norms of rationality suggested by our model. In addition, results of Study 2 lend support to the idea that inferential relevance may be relevant for the evaluation of both conditionals and explanations.
Affiliation(s)
- Simone Sebben
- Department of Psychology, University of Zurich, Switzerland.
11. Henne P, Kulesza A, Perez K, Houcek A. Counterfactual thinking and recency effects in causal judgment. Cognition 2021; 212:104708. PMID: 33819848; DOI: 10.1016/j.cognition.2021.104708.
Abstract
People tend to judge more recent events, relative to earlier ones, as the cause of some particular outcome. For instance, people are more inclined to judge that the last basket, rather than the first, caused the team to win the basketball game. This recency effect, however, reverses in cases of overdetermination: people judge that earlier events, rather than more recent ones, caused the outcome when the event is individually sufficient but not individually necessary for the outcome. In five experiments (N = 5507), we find evidence for the recency effect and the primacy effect for causal judgment. Traditionally, these effects have been a problem for counterfactual views of causal judgment. However, we argue that an extension of a recent counterfactual model of causal judgment explains both the recency and the primacy effect. In line with the predictions of our extended counterfactual model, we also find that, regardless of causal structure, people tend to imagine the counterfactual alternative to the more recent event rather than to the earlier one. Moreover, manipulating this tendency affects causal judgments in the ways predicted by this extended model: asking participants to imagine the counterfactual alternative to the earlier event weakens the interaction between recency and causal structure, and asking participants to imagine the counterfactual alternative to the more recent event strengthens the interaction between recency and causal structure. We discuss these results in relation to work on counterfactual thinking, causal modeling, and late-preemption.
Affiliation(s)
- Paul Henne
- Department of Philosophy, Neuroscience Program at Lake Forest College, USA.
12. Henne P, O'Neill K, Bello P, Khemlani S, De Brigard F. Norms Affect Prospective Causal Judgments. Cogn Sci 2021; 45:e12931. PMID: 33415814; DOI: 10.1111/cogs.12931.
Abstract
People more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some outcome. Until recently, this abnormal-selection effect has been studied using retrospective vignette-based paradigms. We use a novel set of video stimuli to investigate this effect for prospective causal judgments-that is, judgments about the cause of some future outcome. Four experiments show that people more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some future outcome. We show that the abnormal-selection effects are not primarily explained by the perception of agency (Experiment 4). We discuss these results in relation to recent efforts to model causal judgment.
Affiliation(s)
- Paul Henne
- Department of Philosophy, Lake Forest College; Neuroscience Program, Lake Forest College
- Paul Bello
- Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory
- Sangeet Khemlani
- Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory
- Felipe De Brigard
- Center for Cognitive Neuroscience, Duke University; Department of Philosophy, Duke University; Department of Psychology and Neuroscience, Duke University; Duke Institute for Brain Sciences, Duke University
13. Khemlani S, Johnson-Laird PN. Causal conflicts produce domino effects. Q J Exp Psychol (Hove) 2020; 73:2317-2327. PMID: 32967522; DOI: 10.1177/1747021820958416.
Abstract
Inconsistent beliefs call for revision, but which of them should individuals revise? A long-standing view is that they should make minimal changes that restore consistency. An alternative view is that their primary task is to explain how the inconsistency arose. Hence, they are likely to violate minimalism in two ways: they should infer more information than is strictly necessary to establish consistency, and they should reject more information than is strictly necessary to establish consistency. Previous studies corroborated the first effect: reasoners use causal simulations to build explanations that resolve inconsistencies. Here, we show that the second effect is true too: they use causal simulations to reject more information than is strictly necessary to establish consistency. When they abandon a cause, the effects of the cause topple like dominos: reasoners tend to deny the occurrence of each subsequent event in the chain. Four studies corroborated this prediction.
Affiliation(s)
- Sangeet Khemlani
- Navy Center for Applied Research in Artificial Intelligence, U.S. Naval Research Laboratory, Washington, DC, USA
- P N Johnson-Laird
- Princeton University, Princeton, NJ, USA; New York University, New York, NY, USA

14. Korman J, Khemlani S. Explanatory completeness. Acta Psychol (Amst) 2020; 209:103139. PMID: 32750561; DOI: 10.1016/j.actpsy.2020.103139.
Abstract
All explanations are incomplete, but reasoners think some explanations are more complete than others. To explain this behavior, we propose a novel theory of how people assess explanatory incompleteness. The account assumes that reasoners represent explanations as causal mental models - iconic representations of possible arrangements of causes and effects. A complete explanation refers to a single integrated model, whereas an incomplete explanation refers to multiple models. The theory predicts that if there exists an unspecified causal relation - a gap - anywhere within an explanation, reasoners must maintain multiple models to handle the gap. They should treat such explanations as less complete than those without a gap. Four experiments provided participants with causal descriptions, some of which yield one explanatory model, e.g., A causes B and B causes C, and some of which demand multiple models, e.g., A causes X and B causes C. Participants across the studies preferred one-model descriptions to multiple-model ones on tasks that implicitly and explicitly required them to assess explanatory completeness. The studies corroborate the theory. They are the first to reveal the mental processes that underlie the assessment of explanatory completeness. We conclude by reviewing the theory in light of extant accounts of causal reasoning.
15. Davis ZJ, Rehder B. A Process Model of Causal Reasoning. Cogn Sci 2020; 44:e12839. PMID: 32419205; DOI: 10.1111/cogs.12839.
Abstract
How do we make causal judgments? Many studies have demonstrated that people are capable causal reasoners, achieving success on tasks from reasoning to categorization to interventions. However, less is known about the mental processes used to achieve such sophisticated judgments. We propose a new process model, the mutation sampler, that models causal judgments as based on a sample of possible states of the causal system generated using the Metropolis-Hastings sampling algorithm. Across a diverse array of tasks and conditions encompassing over 1,700 participants, we found that our model provided a consistently closer fit to participant judgments than standard causal graphical models. In particular, we found that the biases introduced by mutation sampling accounted for people's consistent, predictable errors that the normative model by definition could not. Moreover, using a novel experimental methodology, we found that those biases appeared in the samples that participants explicitly judged to be representative of a causal system. We conclude by advocating sampling methods as plausible process-level accounts of the computations specified by the causal graphical model framework and highlight opportunities for future research to identify not just what reasoners compute when drawing causal inferences, but also how they compute it.
Affiliation(s)
- Bob Rehder
- Department of Psychology, New York University
16. Overview of Mental Models research using bibliometric indicators. Cogn Process 2020; 21:155-165. PMID: 32056028; DOI: 10.1007/s10339-020-00956-w.
Abstract
The Mental Model concept has evolved from being a representation of reality to which we apply formal logic, to a type of logic with which we make decisions, learn, and adapt. This work uses bibliometric indicators to describe research on Mental Models from 1997 to 2017. The results show progressive growth since the late 1990s and a stationary trend starting in 2010. The existing research is dominated by the fields of individual and organizational psychology as well as education. Since 2007, a corpus of research (works that continue or are based on previous work) has been developed around the themes of memory, cognition, interpretation, and Johnson-Laird's work. In the late 2000s, another possible corpus emerged around team work. It is recommended to develop similar research in specific areas.
17. Schon C, Siebert S, Stolzenburg F. The CoRg Project: Cognitive Reasoning. Künstliche Intelligenz 2019. DOI: 10.1007/s13218-019-00601-5.
18. Johnson-Laird PN, Ragni M. Possibilities as the foundation of reasoning. Cognition 2019; 193:103950. PMID: 31374514; DOI: 10.1016/j.cognition.2019.04.019.
Abstract
Reasoning about possibilities is fundamental in daily life. Yet, it has been little studied in psychology. We present a psychological theory in which it is the foundation of human reasoning. The theory explains how possibilities have distinct interpretations (deontic, epistemic, and alethic), how people represent them in models, and how these models yield inferences. Key principles are that the semantics of possibilities are the same finitary alternatives underlying probabilities, that speech acts can create obligations inexpressible as probabilities, that compound assertions - conditionals and disjunctions - refer to conjunctions of possibilities holding in default of knowledge to the contrary, and that mental models condense multiple consistent possibilities into one. The theory is incompatible with all normal modal logics and with probabilistic logic. Yet, experiments have corroborated its predictions. The article discusses its precursors, rivals, and potentials.
Affiliation(s)
- P N Johnson-Laird
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Marco Ragni
- Cognitive Computation Lab, Institut für Informatik, Technische Fakultät, Albert-Ludwigs-Universität Freiburg, Georges-Köhler-Allee, Geb. 052, 79110 Freiburg, Germany
20. Quelhas AC, Rasga C, Johnson-Laird PN. The Relation Between Factual and Counterfactual Conditionals. Cogn Sci 2018; 42:2205-2228. PMID: 29998570; DOI: 10.1111/cogs.12663.
Abstract
What is the relation between factual conditionals: If A happened then B happened, and counterfactual conditionals: If A had happened then B would have happened? Some theorists propose quite different semantics for the two. In contrast, the theory of mental models and its computer implementation interrelates them. It postulates that both can have a priori truth values, and that the semantic bases of both are possibilities: states that are possible for factual conditionals, and that were once possible but that did not happen for counterfactual conditionals. Two experiments supported these relations. Experiment 1 showed that, like factual conditionals, certain counterfactuals are true a priori, and others are false a priori. Experiment 2 replicated this result and showed that participants selected appropriate paraphrases, referring, respectively, to real and to counterfactual possibilities, for the two sorts of conditional. These results are contrary to alternative accounts of conditionals.
Affiliation(s)
- Ana Cristina Quelhas
- ISPA-Instituto Universitário, Applied Psychology Research Center, Capabilities & Inclusion
- Célia Rasga
- ISPA-Instituto Universitário, Applied Psychology Research Center, Capabilities & Inclusion
- P N Johnson-Laird
- Department of Psychology, Princeton University; Department of Psychology, New York University
21. Khemlani SS, Byrne RMJ, Johnson-Laird PN. Facts and Possibilities: A Model-Based Theory of Sentential Reasoning. Cogn Sci 2018; 42:1887-1924. PMID: 29968343; DOI: 10.1111/cogs.12634.
Abstract
This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.
Collapse
Affiliation(s)
- Sangeet S Khemlani
- Navy Center for Applied Research in Artificial Intelligence, US Naval Research Laboratory
| | - Ruth M J Byrne
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, University of Dublin
| | - Philip N Johnson-Laird
- Department of Psychology, Princeton University
- Department of Psychology, New York University
| |
Collapse
|
22
|
Vasilyeva N, Blanchard T, Lombrozo T. Stable Causal Relationships Are Better Causal Relationships. Cogn Sci 2018; 42:1265-1296. [DOI: 10.1111/cogs.12605] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Revised: 02/01/2018] [Accepted: 02/06/2018] [Indexed: 11/29/2022]
Affiliation(s)
| | | | - Tania Lombrozo
- Department of Psychology; University of California Berkeley
| |
Collapse
|
23
|
Abstract
Although many theories of causal cognition are based on causal graphical models, a key property of such models, the independence relations stipulated by the Markov condition, is routinely violated by human reasoners. This article presents three new accounts of those independence violations, accounts that share the assumption that people's understanding of the correlational structure of data generated from a causal graph differs from that stipulated by the causal graphical model framework. To distinguish these models, experiments assessed how people reason with causal graphs that are larger than those tested in previous studies. A traditional common cause network (Y1←X→Y2) was extended so that the effects themselves had effects (Z1←Y1←X→Y2→Z2). A traditional common effect network (Y1→X←Y2) was extended so that the causes themselves had causes (Z1→Y1→X←Y2←Z2). Subjects' inferences were most consistent with the beta-Q model, in which consistent states of the world (those in which variables are either mostly all present or mostly all absent) are viewed as more probable than stipulated by the causal graphical model framework. Substantial variability in subjects' inferences was also observed, with the result that substantial minorities of subjects were best fit by one of the other models (the dual-prototype or leaky-gate models). The discrepancy between normative and human causal cognition stipulated by these models is foundational in the sense that they locate the error not in people's causal reasoning but rather in their causal representations. As a result, they are applicable to any cognitive theory grounded in causal graphical models, including theories of analogy, learning, explanation, categorization, decision-making, and counterfactual reasoning. Preliminary evidence that independence violations indeed generalize to other judgment types is presented.
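The Markov condition at issue here is easy to exhibit numerically. The sketch below (an illustration with arbitrary, assumed probability values, not material from the study) builds the common-cause network Y1←X→Y2 and shows that the two effects are correlated marginally but independent once the cause is held fixed, the "screening off" relation that human reasoners routinely violate.

```python
from itertools import product

# Common-cause network Y1 <- X -> Y2. Under the Markov condition the joint
# factorizes as P(x, y1, y2) = P(x) * P(y1 | x) * P(y2 | x).
p_x = {1: 0.5, 0: 0.5}          # prior on the common cause X (assumed values)
p_y_given_x = {1: 0.8, 0: 0.2}  # same link strength for X -> Y1 and X -> Y2

def joint(x, y1, y2):
    py1 = p_y_given_x[x] if y1 else 1 - p_y_given_x[x]
    py2 = p_y_given_x[x] if y2 else 1 - p_y_given_x[x]
    return p_x[x] * py1 * py2

def prob(pred):
    # Exact probability of an event: sum the joint over all eight worlds.
    return sum(joint(x, y1, y2)
               for x, y1, y2 in product((0, 1), repeat=3)
               if pred(x, y1, y2))

# Marginally, the two effects are correlated ...
p_y1 = prob(lambda x, y1, y2: y1 == 1)
p_y2 = prob(lambda x, y1, y2: y2 == 1)
p_both = prob(lambda x, y1, y2: y1 == 1 and y2 == 1)
print(p_both, p_y1 * p_y2)  # ~0.34 vs 0.25: dependent

# ... but conditional on X = 1 they are independent (screening off).
p_y1_x1 = prob(lambda x, y1, y2: x == 1 and y1 == 1) / p_x[1]
p_y2_x1 = prob(lambda x, y1, y2: x == 1 and y2 == 1) / p_x[1]
p_both_x1 = prob(lambda x, y1, y2: x == 1 and y1 == 1 and y2 == 1) / p_x[1]
print(p_both_x1, p_y1_x1 * p_y2_x1)  # equal: P(Y1,Y2|X) = P(Y1|X) * P(Y2|X)
```

A reasoner who treats Y1 as still informative about Y2 even after learning X is violating exactly the factorization computed in the last two lines.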
Collapse
Affiliation(s)
- Bob Rehder
- Department of Psychology, New York University, United States.
| |
Collapse
|
24
|
Iran-Nejad A, Bordbar F. Biofunctional Understanding and Conceptual Control: Searching for Systematic Consensus in Systemic Cohesion. Front Psychol 2017; 8:1702. [PMID: 29114235 PMCID: PMC5660850 DOI: 10.3389/fpsyg.2017.01702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2017] [Accepted: 09/15/2017] [Indexed: 11/13/2022] Open
Abstract
For first-generation scientists after the cognitive revolution, knowers were in active control over all (stages of) information processing. Then, following a decade of transition shaped by intense controversy, embodied cognition emerged and suggested sources of control other than those implied by metaphysical information processing. With a thematic focus on embodiment science and an eye toward systematic consensus in systemic cohesion, the present study explores the roles of biofunctional and conceptual control processes in the wholetheme spiral of biofunctional understanding (see Iran-Nejad and Irannejad, 2017b, Figure 1). According to this spiral, each of the two kinds of understanding has its own unique set of knower control processes. For conceptual understanding (CU), knowers have deliberate attention-allocation control over their first-person "knowthat" and "knowhow" content combined as mutually coherent corequisites. For biofunctional understanding (BU), knowers have attention-allocation control only over their "knowthat" content, whereas "knowhow" content is ordinarily conspicuously absent. To test the hypothesis of differences in the manner of control between CU and BU, participants in two experiments read identical-format statements for internal consistency as response time was recorded. The results of Experiment 1 supported the hypothesized differences in the manner of control between the two types of control processes, and Experiment 2 confirmed them. These findings are discussed in terms of the predicted differences between BU and CU control processes, their roles in regulating the physically unobservable flow of systemic cohesion in the wholetheme spiral, and a proposal for systematic consensus in systemic cohesion to serve as the second guiding principle in biofunctional embodiment science, next to physical science's first guiding principle of systematic observation.
Collapse
Affiliation(s)
- Asghar Iran-Nejad
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, The University of Alabama, Tuscaloosa, AL, United States
| | | |
Collapse
|
25
|
Berlin R, Gruen R, Best J. Systems Medicine-Complexity Within, Simplicity Without. JOURNAL OF HEALTHCARE INFORMATICS RESEARCH 2017; 1:119-137. [PMID: 28713872 PMCID: PMC5491616 DOI: 10.1007/s41666-017-0002-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2016] [Revised: 04/12/2017] [Accepted: 04/25/2017] [Indexed: 12/14/2022]
Abstract
This paper presents a brief history of Systems Theory, traces its development into Systems Biology, and relates both to the more traditional investigative method of reductionism. The emergence of Systems Medicine represents the application of Systems Biology to disease and clinical issues. The challenges faced by this transition from Systems Biology to Systems Medicine are explained; the requirements of physicians at the bedside caring for patients, as well as the place of human-human interaction and the needs of the patients, are addressed. An organ-focused transition to Systems Medicine, rather than a genomic-, molecular-, or cell-based effort, is emphasized. Organ focus represents a middle-out approach to ease this transition and to maximize the benefits of scientific discovery and clinical application. This method manages the perceptions of time and space, the massive amounts of human- and patient-related data, and the ensuing complexity of information.
Collapse
Affiliation(s)
- Richard Berlin
- Department of Computer Science, University of Illinois, Urbana, IL, USA
| | - Russell Gruen
- Nanyang Institute of Technology in Health and Medicine, Department of Surgery, Lee Kong Chian School of Medicine, Singapore, Singapore
| | - James Best
- Lee Kong Chian School of Medicine, Singapore, Singapore
| |
Collapse
|
26
|
Ward MK, Volk S, Becker WJ. An Overview of Organizational Neuroscience. ACTA ACUST UNITED AC 2015. [DOI: 10.1108/s1479-357120150000007001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
|
27
|
Johnson-Laird P, Khemlani SS, Goodwin GP. Response to Baratgin et al.: Mental Models Integrate Probability and Deduction. Trends Cogn Sci 2015; 19:548-549. [DOI: 10.1016/j.tics.2015.06.014] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2015] [Accepted: 06/30/2015] [Indexed: 11/15/2022]
|
28
|
Johnson-Laird P, Khemlani SS, Goodwin GP. Logic, probability, and human reasoning. Trends Cogn Sci 2015; 19:201-14. [DOI: 10.1016/j.tics.2015.02.006] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2014] [Revised: 02/03/2015] [Accepted: 02/09/2015] [Indexed: 10/23/2022]
|
29
|
Abstract
Causal composition allows people to generate new causal relations by combining existing causal knowledge. We introduce a new computational model of such reasoning, the force theory, which holds that people compose causal relations by simulating the processes that join forces in the world, and compare this theory with the mental model theory (Khemlani et al., 2014) and the causal model theory (Sloman et al., 2009), which explain causal composition on the basis of mental models and structural equations, respectively. In one experiment, the force theory was uniquely able to account for people's ability to compose causal relationships from complex animations of real-world events. In three additional experiments, the force theory did as well as or better than the other two theories in explaining the causal compositions people generated from linguistically presented causal relations. Implications for causal learning and the hierarchical structure of causal knowledge are discussed.
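The idea of reading causal relations off combined forces can be caricatured in a few lines. The sketch below is a heavily simplified rendering of the force-dynamic approach, not Wolff and Barbey's actual model: it classifies a relation by whether the patient's own force, and the vector sum of the affector and patient forces, point toward the endstate. All names, the 2D vectors, and the 90-degree "points toward" criterion are assumptions made for the example.

```python
def toward(force, endstate):
    # A force "points toward" the endstate if it is within 90 degrees of it
    # (positive dot product) -- a simplifying assumption of this sketch.
    return force[0] * endstate[0] + force[1] * endstate[1] > 0

def classify(affector, patient, endstate):
    # Forces compose by vector addition; the relation depends on whether the
    # patient alone, and the resultant, are directed at the endstate.
    resultant = (affector[0] + patient[0], affector[1] + patient[1])
    patient_toward = toward(patient, endstate)
    resultant_toward = toward(resultant, endstate)
    if not patient_toward and resultant_toward:
        return 'CAUSE'    # patient not headed to the endstate; combined force is
    if patient_toward and resultant_toward:
        return 'ENABLE'   # patient already headed there; affector is concordant
    if patient_toward and not resultant_toward:
        return 'PREVENT'  # affector diverts the patient from the endstate
    return 'NO RELATION'

print(classify(affector=(3, 0), patient=(-1, 0), endstate=(1, 0)))  # CAUSE
print(classify(affector=(-3, 0), patient=(1, 0), endstate=(1, 0)))  # PREVENT
```

Composition then falls out naturally: the resultant of one interaction can serve as the affector or patient force of the next, which is the kind of chaining the abstract's composition experiments probe.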
Collapse
Affiliation(s)
- Phillip Wolff
- Department of Psychology, Emory University, Atlanta, GA, USA
| | - Aron K. Barbey
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
| |
Collapse
|