1. Zemla JC, Sloman SA, Bechlivanidis C, Lagnado DA. Not so simple! Causal mechanisms increase preference for complex explanations. Cognition 2023; 239:105551. PMID: 37478697. DOI: 10.1016/j.cognition.2023.105551.
Abstract
Mechanisms play a central role in how we think about causality, yet not all causal explanations describe mechanisms. Across five experiments, we find that people evaluate explanations differently depending on whether or not they include mechanisms. Despite common wisdom suggesting that explanations ought to be simple in the sense of appealing to as few causes as necessary to explain an effect, the literature is divided over whether people adhere to this principle. Our findings suggest that the presence of causal mechanisms in an explanation is one factor that reduces adherence. While competing explanations are often judged based on their probability of being correct, mechanisms afford a different way of evaluating explanations: They describe the underlying nature of causal relations. Complex explanations (appealing to multiple causes) contain more causal relations and thus allow for more mechanistic information, providing a fuller account of the causal network and promoting a greater sense of understanding.
Affiliations
- Jeffrey C Zemla: Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13244, USA
- Steven A Sloman: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St., Providence, RI 02912, USA
- Christos Bechlivanidis: Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
- David A Lagnado: Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
2. Kolvoort IR, Temme N, van Maanen L. The Bayesian Mutation Sampler Explains Distributions of Causal Judgments. Open Mind (Camb) 2023; 7:318-349. PMID: 37416078. PMCID: PMC10320818. DOI: 10.1162/opmi_a_00080.
Abstract
One consistent finding in the causal reasoning literature is that causal judgments are rather variable. In particular, distributions of probabilistic causal judgments tend not to be normal and are often not centered on the normative response. As an explanation for these response distributions, we propose that people engage in 'mutation sampling' when confronted with a causal query and integrate this information with prior information about that query. The Mutation Sampler model (Davis & Rehder, 2020) posits that we approximate probabilities using a sampling process, explaining the average responses of participants on a wide variety of tasks. Careful analysis, however, shows that its predicted response distributions do not match empirical distributions. We develop the Bayesian Mutation Sampler (BMS) which extends the original model by incorporating the use of generic prior distributions. We fit the BMS to experimental data and find that, in addition to average responses, the BMS explains multiple distributional phenomena including the moderate conservatism of the bulk of responses, the lack of extreme responses, and spikes of responses at 50%.
Affiliations
- Ivar R. Kolvoort: Department of Psychology, University of Amsterdam; Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Nina Temme: Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Leendert van Maanen: Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
3. Causal reasoning without mechanism. PLoS One 2022; 17:e0268219. PMID: 35560140. PMCID: PMC9106179. DOI: 10.1371/journal.pone.0268219.
Abstract
Unobservable mechanisms that tie causes to their effects generate observable events. How can one make inferences about hidden causal structures? This paper introduces the domain-matching heuristic to explain how humans perform causal reasoning when lacking mechanistic knowledge. We posit that people reduce the otherwise vast space of possible causal relations by focusing only on the likeliest ones. When thinking about a cause, people tend to think about possible effects that participate in the same domain, and vice versa. To explore the specific domains that people use, we asked people to cluster artifacts. The analyses revealed three commonly employed mechanism domains: the mechanical, chemical, and electromagnetic. Using these domains, we tested the domain-matching heuristic by testing adults’ and children’s causal attribution, prediction, judgment, and subjective understanding. We found that people’s responses conform with domain-matching. These results provide evidence for a heuristic that explains how people engage in causal reasoning without directly appealing to mechanistic or probabilistic knowledge.
4. Park J, McGillivray S, Bye JK, Cheng PW. Causal invariance as a tacit aspiration: Analytic knowledge of invariance functions. Cogn Psychol 2021; 132:101432. PMID: 34861583. DOI: 10.1016/j.cogpsych.2021.101432.
Abstract
For causal knowledge to be worth learning, it must remain valid when that knowledge is applied. Because unknown background causes are potentially present, and may vary across the learning and application contexts, extricating the strength of a candidate cause requires an assumption regarding the decomposition of the observed outcome into the unobservable influences from the candidate and from background causes. Acquiring stable, useable causal knowledge is challenging when the search space of candidate causes is large, such that the reasoner's current set of candidates may fail to include a cause that generalizes well to an application context. We have hypothesized that an indispensable navigation device that shapes our causal representations toward useable knowledge involves the concept of causal invariance - the sameness of how a cause operates to produce an effect across contexts. Here, we tested our causal invariance hypothesis by making use of the distinct mathematical functions expressing causal invariance for two outcome-variable types: continuous and binary. Our hypothesis predicts that, given identical prior domain knowledge, intuitive causal judgments should vary in accord with the causal-invariance function for a reasoner's perceived outcome-variable type. The judgments are made as if the reasoner aspires to formulate causally invariant knowledge. Our experiments involved two cue-competition paradigms: blocking and overexpectation. Results show that adult humans tacitly use the appropriate causal-invariance functions for decomposition. Our analysis offers an explanation for the apparent elusiveness of the blocking effect and the adaptiveness of intuitive causal inference to the representation-dependent reality in the mind.
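The abstract does not spell out the two invariance functions. A common assumption in this literature (notably Cheng's power PC framework, from which this work descends) is a noisy-OR combination for binary outcomes and an additive combination for continuous outcomes. A minimal sketch under that assumption, with illustrative numbers, shows why the attributed strength of a candidate cause depends on which function the reasoner uses for decomposition:

```python
def combine_binary(w1, w2):
    # Noisy-OR: a standard causal-invariance function for binary outcomes.
    # Each cause operates independently; the effect is absent only if
    # every cause fails to produce it.
    return 1 - (1 - w1) * (1 - w2)

def combine_continuous(w1, w2):
    # Additive: a standard causal-invariance function for continuous outcomes.
    return w1 + w2

# Decomposition: suppose the observed outcome with both causes present is
# 0.9 and the known strength of cause 1 is 0.6. The strength attributed to
# cause 2 differs by invariance function (numbers are illustrative).
def strength_of_c2_binary(p_both, w1):
    return (p_both - w1) / (1 - w1)   # invert noisy-OR

def strength_of_c2_continuous(p_both, w1):
    return p_both - w1                # invert additive combination

print(round(strength_of_c2_binary(0.9, 0.6), 2))      # 0.75 under noisy-OR
print(round(strength_of_c2_continuous(0.9, 0.6), 2))  # 0.3 under additivity
```

The same observation thus licenses different causal strengths depending on the assumed outcome-variable type, which is the lever the experiments use.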
5. Sloman SA. How Do We Believe? Top Cogn Sci 2021; 14:31-44. PMID: 34792846. DOI: 10.1111/tops.12580.
Abstract
My first 30-odd years of research in cognitive science have been driven by an attempt to balance two facts about human thought that seem incompatible and two corresponding ways of understanding information processing. The facts are that, on one hand, human memories serve as sophisticated pattern recognition devices with great flexibility and an ability to generalize and predict as long as circumstances remain sufficiently familiar. On the other hand, we are capable of deploying an enormous variety of representational schemes that map closely onto articulable structure in the world and that support explanation even in unfamiliar circumstances. The contrasting ways of modeling such processes involve, first, more and more sophisticated associative models that capture progressively higher-order statistical structure and, second, more powerful representational languages for other sorts of structure, especially compositional and causal structure. My efforts to reconcile these forces have taken me from the study of memory to induction and category knowledge to causal reasoning. In the process, I have consistently appealed to dual systems of thinking. I have come to realize that a key reason for our success as cognizers is that we rely on others for most of our information processing needs; we live in a community of knowledge. We make use of others both intuitively-by outsourcing much of our thinking without knowing we are doing it-and by deliberating with others.
Affiliations
- Steven A Sloman: Department of Cognitive, Linguistic, & Psychological Sciences, Brown University
6. Stephan S, Waldmann MR. The role of mechanism knowledge in singular causation judgments. Cognition 2021; 218:104924. PMID: 34673301. DOI: 10.1016/j.cognition.2021.104924.
Abstract
Singular causation queries (e.g., "Did Mary's taking contraceptives cause her thrombosis?") are ubiquitous in everyday life and crucial in many professional disciplines, such as medicine or law. Knowledge about general causal regularities is necessary but not sufficient for establishing a singular causation relation because it is possible that co-occurrences consistent with known regularities are in an individual case still just coincidental. Thus, further cues are helpful to establish a singular causation relation. In the present research we focus on information about mechanisms as a potent cue. While previous studies have shown that reasoners consider mechanism information as important when it comes to answering singular causation queries, no formal model has been proposed that explains why this is the case. We here present a computational model that explains how causal mechanism information affects singular causation judgments. We also use the model to identify conditions that restrict the utility of mechanism information. We report three experiments testing the implications of our formal analysis. In Experiment 1 we found that reasoners systematically use mechanism information, largely in accordance with our formal model, although we also discovered that some people seem to rely on simpler, computationally less demanding reasoning strategies. The results of Experiments 2 and 3 demonstrate that reasoners have a tentative understanding of the conditions that restrict the utility of causal mechanism information.
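The paper's own model is not reproduced in the abstract, but the core problem it addresses can be illustrated with a sketch (the noisy-OR parameterization and numbers below are assumptions for illustration): even when both a candidate cause and a background cause were present and the effect occurred, general regularities only fix the probability that the candidate's mechanism was actually operative.

```python
def p_candidate_operative(w_c, w_a):
    # Noisy-OR: with candidate cause C and background A both present,
    # P(e) = 1 - (1 - w_c) * (1 - w_a).
    # Given that E occurred, the probability that C's mechanism actually
    # fired (rather than E being due to A alone) is w_c / P(e).
    p_e = 1 - (1 - w_c) * (1 - w_a)
    return w_c / p_e

# Even a strong general regularity leaves singular causation uncertain
# when background causes are also potent:
print(round(p_candidate_operative(0.8, 0.5), 3))  # 0.889
```

This residual uncertainty is what makes additional cues, such as mechanism information, diagnostic for singular causation queries.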
7. Wang M, Sun J. A situation-modulated minimal change account for causal inferences about causal networks. Q J Exp Psychol (Hove) 2020; 73:2403-2411. DOI: 10.1177/1747021820965226.
Abstract
Although causal Bayes networks are applicable to examining causal inferences about different static objects and about a changing object with different states, previous studies investigated the former, but not the latter. We propose a situation-modulated minimal change account for causal inferences. It predicts that dynamic situations are more likely to elicit minimal revisions on causal networks and adherence to the Markov assumption than static situations. Two experiments were conducted to investigate qualitative causal inferences about causal networks with binary and numerical variables, respectively. It was found that qualitative causal inferences were more likely to adhere to the Markov assumption in dynamic situations than in static situations. This finding supports the situation-modulated minimal change account rather than the other alternative accounts. We conclude that dynamic situations are more likely to elicit minimal revisions on causal networks and adherence to the Markov assumption than static situations. This conclusion goes beyond the previously predominant view that causal inferences are apt to violate the Markov assumption.
Affiliations
- Moyun Wang: School of Psychology, Shaanxi Normal University, Xi’an, China
- Jinrui Sun: School of Psychology, Shaanxi Normal University, Xi’an, China
8. Davis ZJ, Rehder B. A Process Model of Causal Reasoning. Cogn Sci 2020; 44:e12839. PMID: 32419205. DOI: 10.1111/cogs.12839.
Abstract
How do we make causal judgments? Many studies have demonstrated that people are capable causal reasoners, achieving success on tasks from reasoning to categorization to interventions. However, less is known about the mental processes used to achieve such sophisticated judgments. We propose a new process model-the mutation sampler-that models causal judgments as based on a sample of possible states of the causal system generated using the Metropolis-Hastings sampling algorithm. Across a diverse array of tasks and conditions encompassing over 1,700 participants, we found that our model provided a consistently closer fit to participant judgments than standard causal graphical models. In particular, we found that the biases introduced by mutation sampling accounted for people's consistent, predictable errors that the normative model by definition could not. Moreover, using a novel experimental methodology, we found that those biases appeared in the samples that participants explicitly judged to be representative of a causal system. We conclude by advocating sampling methods as plausible process-level accounts of the computations specified by the causal graphical model framework and highlight opportunities for future research to identify not just what reasoners compute when drawing causal inferences, but also how they compute it.
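A minimal sketch of the sampler's core idea, assuming a binary common-cause network with illustrative parameters (not the networks or parameters used in the paper): states of the causal system are visited by a Metropolis-Hastings chain whose proposal "mutates" one variable at a time, and judgments are read off the visited states.

```python
import random

def mutation_sampler(joint, n_vars, n_samples, start, seed=0):
    """Metropolis-Hastings over binary states of a causal system.
    Proposal: flip ("mutate") one randomly chosen variable; the proposal
    is symmetric, so accept with probability min(1, p'/p)."""
    rng = random.Random(seed)
    state = list(start)
    samples = []
    for _ in range(n_samples):
        proposal = list(state)
        i = rng.randrange(n_vars)
        proposal[i] = 1 - proposal[i]
        # accept iff rng.random() <= p'/p (always accepted when p' >= p)
        if joint(proposal) >= joint(state) * rng.random():
            state = proposal
        samples.append(tuple(state))
    return samples

# Common-cause network Y1 <- X -> Y2 (parameters are assumptions).
def joint(s):
    x, y1, y2 = s
    p = 0.5                       # P(X) is uniform
    for y in (y1, y2):
        py = 0.8 if x else 0.1    # P(effect present | cause state)
        p *= py if y else 1 - py
    return p

samples = mutation_sampler(joint, 3, 20000, (1, 1, 1))
x_on = [s for s in samples if s[0] == 1]
print(round(sum(s[2] for s in x_on) / len(x_on), 2))  # approx. 0.8
```

With many samples the chain recovers the graphical model's conditionals; the model's explanatory work comes from assuming people rely on few, autocorrelated samples, which yields the systematic biases described above.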
Affiliations
- Bob Rehder: Department of Psychology, New York University
9.

Abstract
Although many theories of causal cognition are based on causal graphical models, a key property of such models-the independence relations stipulated by the Markov condition-is routinely violated by human reasoners. This article presents three new accounts of those independence violations, accounts that share the assumption that people's understanding of the correlational structure of data generated from a causal graph differs from that stipulated by the causal graphical model framework. To distinguish these models, experiments assessed how people reason with causal graphs that are larger than those tested in previous studies. A traditional common cause network (Y1←X→Y2) was extended so that the effects themselves had effects (Z1←Y1←X→Y2→Z2). A traditional common effect network (Y1→X←Y2) was extended so that the causes themselves had causes (Z1→Y1→X←Y2←Z2). Subjects' inferences were most consistent with the beta-Q model in which consistent states of the world-those in which variables are either mostly all present or mostly all absent-are viewed as more probable than stipulated by the causal graphical model framework. Substantial variability in subjects' inferences was also observed, with the result that substantial minorities of subjects were best fit by one of the other models (the dual-prototype or leaky-gate models). The discrepancy between normative and human causal cognition stipulated by these models is foundational in the sense that they locate the error not in people's causal reasoning but rather in their causal representations. As a result, they are applicable to any cognitive theory grounded in causal graphical models, including theories of analogy, learning, explanation, categorization, decision-making, and counterfactual reasoning. Preliminary evidence that independence violations indeed generalize to other judgment types is presented.
Affiliations
- Bob Rehder: Department of Psychology, New York University, United States
10. Saporta-Sorozon K, Danziger S, Sloman S. Causal Models Drive Preference between Drugs that Treat a Focal versus Multiple Symptoms. J Behav Decis Mak 2017. DOI: 10.1002/bdm.1999.
Affiliations
- Shai Danziger: Coller School of Management, Tel Aviv University, Tel Aviv, Israel
- Steven Sloman: Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
11. Jarecki JB, Meder B, Nelson JD. Naïve and Robust: Class-Conditional Independence in Human Classification Learning. Cogn Sci 2017; 42:4-42. PMID: 28574602. DOI: 10.1111/cogs.12496.
Abstract
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference problem, allows for informed inferences about novel feature combinations, and performs robustly across different statistical environments. We designed a new Bayesian classification learning model (the dependence-independence structure and category learning model, DISC-LM) that incorporates varying degrees of prior belief in class-conditional independence, learns whether or not independence holds, and adapts its behavior accordingly. Theoretical results from two simulation studies demonstrate that classification behavior can appear to start simple, yet adapt effectively to unexpected task structures. Two experiments-designed using optimal experimental design principles-were conducted with human learners. Classification decisions of the majority of participants were best accounted for by a version of the model with very high initial prior belief in class-conditional independence, before adapting to the true environmental structure. Class-conditional independence may be a strong and useful default assumption in category learning tasks.
Affiliations
- Jana B Jarecki: Department of Psychology, University of Basel; Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development
- Björn Meder: Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development
- Jonathan D Nelson: Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development; School of Psychology, University of Surrey
12. Discounting and Augmentation in Causal Conditional Reasoning: Causal Models or Shallow Encoding? PLoS One 2016; 11:e0167741. PMID: 28030583. PMCID: PMC5193512. DOI: 10.1371/journal.pone.0167741.
Abstract
Recent research comparing mental models theory and causal Bayes nets for their ability to account for discounting and augmentation inferences in causal conditional reasoning had some limitations. One of the experiments used an ordinal scale and multiple items and analysed the data by subjects and items. This procedure can create a variety of problems that can be resolved by using an appropriate cumulative link function mixed models approach in which items are treated as random effects. Experiment 1 replicated this earlier experiment and analysed the results using appropriate data analytic techniques. Although it successfully replicated earlier research, the pattern of results could be explained by a much simpler "shallow encoding" hypothesis. Experiment 2 introduced a manipulation to critically test this hypothesis. The results favoured the causal Bayes nets predictions over shallow encoding and were not consistent with mental models theory. Experiment 1 provided qualified support for the causal Bayes net approach using appropriate statistics because it also replicated the failure to observe one of the predicted main effects. Experiment 2 discounted one plausible explanation for this failure. While these experiments were successful within their limited goals, more research is required to account for the pattern of findings observed with this paradigm.
13. Rehder B. Reasoning With Causal Cycles. Cogn Sci 2016; 41 Suppl 5:944-1002. PMID: 27859522. DOI: 10.1111/cogs.12447.
Abstract
This article assesses how people reason with categories whose features are related in causal cycles. Whereas models based on causal graphical models (CGMs) have enjoyed success modeling category-based judgments as well as a number of other cognitive phenomena, CGMs are only able to represent causal structures that are acyclic. A number of new formalisms that allow cycles are introduced and evaluated. Dynamic Bayesian networks (DBNs) represent cycles by unfolding them over time. Chain graphs augment CGMs by allowing the presence of undirected links that model feedback relations between variables. Unfolded chain graphs are chain graphs that unfold over time. An existing model of causal cycles (alpha centrality) is also evaluated. Four experiments in which subjects reason about categories with cyclically related features provided evidence against DBNs and alpha centrality and for the two types of chain graphs. Chain graphs-a mechanism for representing the equilibrium distribution of a dynamic system-may thus be good candidates for modeling how people reason causally with complex systems. Applications of chain graphs to areas of cognition other than category-based judgments are discussed.
Affiliations
- Bob Rehder: Department of Psychology, New York University
14. Failures of explaining away and screening off in described versus experienced causal learning scenarios. Mem Cognit 2016; 45:245-260. DOI: 10.3758/s13421-016-0662-3.
15. Do people reason rationally about causally related events? Markov violations, weak inferences, and failures of explaining away. Cogn Psychol 2016; 87:88-134. PMID: 27261539. DOI: 10.1016/j.cogpsych.2016.05.002.
Abstract
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian networks calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1→Y→X2, common cause structures X1←Y→X2, and common effect structures X1→Y←X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative "explaining away" pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
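The normative baseline these judgments are compared against can be illustrated by enumeration on a chain X1→Y→X2 (the parameters below are assumptions, not the experiments' learning distributions). The Markov assumption appears as exact screening off: once Y is known, knowing X1 does not change the normative inference about X2.

```python
from itertools import product

# Chain X1 -> Y -> X2; each link is noisy-OR with strength W over base rate B.
C, W, B = 0.5, 0.75, 0.1

def link(cause_on):
    # P(effect = 1 | cause state): noisy-OR of link strength and base rate.
    return 1 - (1 - B) * (1 - W) if cause_on else B

def joint(x1, y, x2):
    p = C if x1 else 1 - C
    p *= link(x1) if y else 1 - link(x1)
    p *= link(y) if x2 else 1 - link(y)
    return p

def prob(query, value, evidence):
    """P(query = value | evidence) by summing the joint over all states."""
    num = den = 0.0
    for x1, y, x2 in product((0, 1), repeat=3):
        state = {'x1': x1, 'y': y, 'x2': x2}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(x1, y, x2)
        den += p
        if state[query] == value:
            num += p
    return num / den

# Screening off: given Y, X1 is normatively irrelevant to X2.
with_x1 = prob('x2', 1, {'y': 1, 'x1': 1})
without = prob('x2', 1, {'y': 1, 'x1': 0})
print(round(with_x1, 4), round(without, 4))  # identical values
```

Empirically, the studies above find that human judgments often violate this equality (Markov violations), which is the central deviation from the normative model under discussion.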
16. Waldmann MR, Mayrhofer R. Hybrid Causal Representations. Psychology of Learning and Motivation 2016. DOI: 10.1016/bs.plm.2016.04.001.
17. Johnson SGB, Ahn WK. Causal Networks or Causal Islands? The Representation of Mechanisms and the Transitivity of Causal Judgment. Cogn Sci 2015; 39:1468-1503. PMID: 25556901. PMCID: PMC4490159. DOI: 10.1111/cogs.12213.
Abstract
Knowledge of mechanisms is critical for causal reasoning. We contrasted two possible organizations of causal knowledge—an interconnected causal network, where events are causally connected without any boundaries delineating discrete mechanisms; or a set of disparate mechanisms—causal islands—such that events in different mechanisms are not thought to be related even when they belong to the same causal chain. To distinguish these possibilities, we tested whether people make transitive judgments about causal chains by inferring, given A causes B and B causes C, that A causes C. Specifically, causal chains schematized as one chunk or mechanism in semantic memory (e.g., exercising, becoming thirsty, drinking water) led to transitive causal judgments. On the other hand, chains schematized as multiple chunks (e.g., having sex, becoming pregnant, becoming nauseous) led to intransitive judgments despite strong intermediate links (Experiments 1-3). Normative accounts of causal intransitivity could not explain these intransitive judgments (Experiments 4 and 5).
18. Sobel DM, Legare CH. Causal learning in children. Wiley Interdisciplinary Reviews: Cognitive Science 2015; 5:413-427. PMID: 26308654. DOI: 10.1002/wcs.1291.
Abstract
How do children learn the causal structure of the environment? We first summarize a set of theories from the adult literature on causal learning, including associative models, parameter estimation theories, and causal structure learning accounts, as applicable to developmental science. We focus on causal graphical models as a description of children's causal knowledge, and the implications of this computational description for children's causal learning. We then examine the contributions of explanation and exploration to causal learning from a computational standpoint. Finally, we examine how children might learn causal knowledge from others and how computational and constructivist accounts of causal learning can be integrated.
Affiliations
- David M Sobel: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Cristine H Legare: Department of Psychology, The University of Texas at Austin, Austin, TX, USA
19.

Abstract
Causal knowledge plays a crucial role in human thought, but the nature of causal representation and inference remains a puzzle. Can human causal inference be captured by relations of probabilistic dependency, or does it draw on richer forms of representation? This article explores this question by reviewing research in reasoning, decision making, various forms of judgment, and attribution. We endorse causal Bayesian networks as the best normative framework and as a productive guide to theory building. However, it is incomplete as an account of causal thinking. On the basis of a range of experimental work, we identify three hallmarks of causal reasoning-the role of mechanism, narrative, and mental simulation-all of which go beyond mere probabilistic knowledge. We propose that the hallmarks are closely related. Mental simulations are representations over time of mechanisms. When multiple actors are involved, these simulations are aggregated into narratives.
Affiliations
- Steven A Sloman: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912, USA
20. Rehder B. Independence and dependence in human causal reasoning. Cogn Psychol 2014; 72:54-107. DOI: 10.1016/j.cogpsych.2014.02.002.
21. Mayrhofer R, Waldmann MR. Agents and causes: dispositional intuitions as a guide to causal structure. Cogn Sci 2014; 39:65-95. PMID: 24831193. DOI: 10.1111/cogs.12132.
Abstract
Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.
22.

Abstract
Previous work has shown that predictions can be mediated by mechanistic beliefs. The present study shows that such mediation only occurs in the face of contradictory, and not corroborative, evidence. In four experiments, we presented participants with causal statements describing a common-cause structure (E1 ← C → E2). Then we informed them of the states of C and E1 and asked them to judge the likelihood of E2. In Experiments 1 and 2, we manipulated whether the mechanisms supporting the two effects were the same or different, and whether the evidence presented confirmed or contradicted the participants' expectations. The relation between the mechanisms only influenced predictions when evidence contradicted the expectations, but not when it was consistent. In Experiments 3 and 4, we used a common-cause structure with identical mechanisms. We manipulated the order in which predictions were made. When confirmatory predictions were made before contradictory predictions, mechanistic modulation was not observed in the confirmatory case. In contrast, the modulation was found when confirmatory predictions were made after contradictory ones. The results support the contradiction hypothesis that causal structure is revised during prediction, but only in the face of unexpected evidence.