1.
Nickerson RS, Butler SF, Barch DH. Looking behind: Turning cards in the selection task. Q J Exp Psychol (Hove) 2021; 74:1451-1464. PMID: 33629644. DOI: 10.1177/17470218211001293.
Abstract
Wason's selection task requires that one imagine which of four cards, each of which has a letter on one side and a number on the other, one would have to turn over to determine whether a statement about the cards is true or false. For example, one might see four cards showing T, H, 6, and 4 and be asked to say which card or cards one would have to turn over to determine whether a statement in the form of If a card has T on one side, it has 4 on the other is true. In the great majority of experiments with this task no cards are actually turned. This limits the conclusions that can be drawn from experimental results. In two experiments participants actually turned (had a computer turn) virtual cards so as to show what they contained on their originally hidden sides. Participants were given a monetary incentive to do well on the task, and they performed it, with trial-by-trial feedback, many times. Performance was much better than is typically obtained with the more common way of performing the task. Results also demonstrate the importance of the precise wording of the statement to be evaluated and how a misinterpretation could help account for a tendency for people to turn only a single card even when the turning of two is required. Results prompt several questions of a theoretical nature and are discussed as they relate to recent theoretical treatments of the selection task.
2.
Liefgreen A, Pilditch T, Lagnado D. Strategies for selecting and evaluating information. Cogn Psychol 2020; 123:101332. PMID: 32977167. DOI: 10.1016/j.cogpsych.2020.101332.
Abstract
Within the domain of psychology, Optimal Experimental Design (OED) principles have been used to model how people seek and evaluate information. Despite proving valuable as computational-level methods to account for people's behaviour, their descriptive and explanatory powers remain largely unexplored. In a series of experiments, we used a naturalistic crime investigation scenario to examine how people evaluate queries, as well as outcomes, in probabilistic contexts. We aimed to uncover the psychological strategies that people use, not just to assess whether they deviated from OED principles. In addition, we explored the adaptiveness of the identified strategies across both one-shot and stepwise information search tasks. We found that people do not always evaluate queries strictly in OED terms and use distinct strategies, such as by identifying a leading contender at the outset. Moreover, we identified aspects of zero-sum thinking and risk aversion that interact with people's information search strategies. Our findings have implications for building a descriptive account of information seeking and evaluation, accounting for factors that currently lie outside the realm of information-theoretic OED measures, such as context and the learner's own preferences.
Affiliation(s)
- Alice Liefgreen
- Department of Experimental Psychology, University College London, UK.
- Toby Pilditch
- Department of Experimental Psychology, University College London, UK; University of Oxford, School of Geography and the Environment, Oxford, UK
- David Lagnado
- Department of Experimental Psychology, University College London, UK
3.
Abstract
Five experiments are reported based upon Evans' (1996) inspection time paradigm, in which subjects are required to solve computer-presented Wason Selection Task problems while simultaneously using a mouse to indicate which card is currently under consideration. It had previously been found that selected cards were inspected for considerably longer than were non-selected cards, and this was taken as support for the existence of pre-conscious heuristics that direct attention towards relevant aspects of a problem. The first experiment reported here fully replicated this effect. However, by systematically varying the task format in subsequent experiments, the effect was found to diminish, disappear, or even reverse. The change in effect size and direction was not accompanied by any systematic variations in the subjects' card choices, indicating that the changes in task format had not altered the operation of the relevance-determining heuristics. On balance, it is suggested that the inspection time effect appears to be artefactual, and the inspection time paradigm therefore does not constitute satisfactory evidence for the existence of pre-conscious heuristics.
4.
Nickerson RS, Butler SF, Barch DH. Set size, assertion form, thematic content and sampling in the selection task. Thinking & Reasoning 2017. DOI: 10.1080/13546783.2016.1275795.
Affiliation(s)
- Daniel H. Barch
- Social, Statistical, & Environmental Sciences Division, Research Triangle Institute International, Waltham, MA
5.
Schalk L, Saalbach H, Stern E. Approaches to Foster Transfer of Formal Principles: Which Route to Take? PLoS One 2016; 11:e0148787. PMID: 26871902. PMCID: PMC4752471. DOI: 10.1371/journal.pone.0148787.
Abstract
Enabling learners to transfer knowledge about formal principles to new problems is a major aim of science and mathematics education, which, however, is notoriously difficult to reach. Previous research advocates different approaches to introducing principles so as to foster transfer of knowledge about them. One approach suggests teaching a generic formalism of the principles. Another suggests presenting (at least) two concrete cases instantiating the principle. A third suggests presenting a generic formalism accompanied by a case. As yet, though, empirical results regarding the transfer potential of these approaches are mixed and difficult to integrate, as the three approaches have rarely been tested competitively. Furthermore, the approaches have been evaluated against different control conditions and assessed with varying transfer measures. In the present experiment, we introduced undergraduates to the formal principles of propositional logic with the aim of systematically comparing the transfer potential of the different approaches, both against each other and against a common control condition, using various learning and transfer tasks. Results indicate that all approaches supported successful learning and transfer of the principles, but also caused systematic differences in the magnitude of transfer. In particular, the combination of a generic formalism with a case was surprisingly unsuccessful, while learners who compared two cases outperformed the control condition. We discuss how the simultaneous assessment of the different approaches allows the underlying learning mechanisms to be captured more precisely and advances theory on how these mechanisms contribute to transfer performance.
6.
Voorspoels W, Navarro DJ, Perfors A, Ransom K, Storms G. How do people learn from negative evidence? Non-monotonic generalizations and sampling assumptions in inductive reasoning. Cogn Psychol 2015. DOI: 10.1016/j.cogpsych.2015.07.001.
7.
Rusconi P, McKenzie CRM. Insensitivity and Oversensitivity to Answer Diagnosticity in Hypothesis Testing. Q J Exp Psychol (Hove) 2013; 66:2443-2464. DOI: 10.1080/17470218.2013.793732.
Abstract
Two experiments examined how people perceive the diagnosticity of different answers (“yes” and “no”) to the same question. We manipulated whether the “yes” and the “no” answers conveyed the same amount of information or not, as well as the presentation format of the probabilities of the features inquired about. In Experiment 1, participants were presented with only the percentages of occurrence of the features, which most straightforwardly apply to the diagnosticity of “yes” answers. In Experiment 2, participants received in addition the percentages of the absence of features, which serve to assess the diagnosticity of “no” answers. Consistent with previous studies, we found that participants underestimated the difference in the diagnosticity conveyed by different answers to the same question. However, participants' insensitivity was greater when the normative (Bayesian) diagnosticity of the “no” answer was higher than that of the “yes” answer. We also found oversensitivity to answer diagnosticity, whereby participants valued as differentially diagnostic two answers that were normatively equal in terms of their diagnosticity. Presenting to participants the percentages of occurrence of the features inquired about together with their complements increased their sensitivity to the diagnosticity of answers. We discuss the implications of these findings for confirmation bias in hypothesis testing.
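The asymmetry at issue can be made concrete with a small sketch (illustrative numbers of my own, not taken from the paper): one standard Bayesian measure of an answer's diagnosticity is the magnitude of its log-likelihood ratio, and "yes" and "no" answers to the same question can differ sharply on it.

```python
import math

# Illustrative numbers (not the paper's materials): a feature is present
# in 90% of hypothesis H1's members but only 50% of H2's.
p_yes = {"H1": 0.9, "H2": 0.5}

def log_lr(answer):
    """Log-likelihood ratio of an answer for H1 over H2; its magnitude
    is one Bayesian measure of the answer's diagnosticity."""
    p1 = p_yes["H1"] if answer == "yes" else 1 - p_yes["H1"]
    p2 = p_yes["H2"] if answer == "yes" else 1 - p_yes["H2"]
    return math.log(p1 / p2)

# |log LR| of "no" (|log(0.1/0.5)| ~ 1.61) exceeds that of "yes"
# (log(0.9/0.5) ~ 0.59): here the "no" answer is normatively far more
# diagnostic, the kind of difference participants underestimated.
```

With these numbers the two answers to a single question differ in diagnosticity by roughly a factor of three on the log scale, yet the question itself looks symmetric at first glance.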
Affiliation(s)
- Patrice Rusconi
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Craig R. M. McKenzie
- Rady School of Management and Department of Psychology, University of California, San Diego, San Diego, CA, USA
8.
Kayhan VO. Seeking health information on the web: Positive hypothesis testing. Int J Med Inform 2013; 82:268-275. DOI: 10.1016/j.ijmedinf.2012.12.004.
9.
Roberts MJ, Newton EJ. Rapid-response versus free-time selection tasks using different logical connectives. Journal of Cognitive Psychology 2011. DOI: 10.1080/20445911.2011.584526.
10.
Electrophysiological correlates of hypothesis evaluation: revealed with a modified Wason's selection task. Brain Res 2011; 1408:17-26. PMID: 21774918. DOI: 10.1016/j.brainres.2011.06.056.
Abstract
A modified Wason's selection task was used to explore brain correlates of hypothesis evaluation, a core process of hypothesis testing. Twenty-two undergraduate participants (11 males, 11 females) were provided with a proposition (hypothesis) and a card. They were asked to evaluate whether the card verified or falsified the given proposition while event related potentials (ERP) were measured. Behavioral results showed that participants required less time to make correct responses in verification conditions than in falsification conditions. The ERPs time-locked to the second side of each card showed that (1) smaller amplitudes of P2 were elicited in backward falsification than in backward verification, which reflected a lower intensity of perception; (2) a profound negative deflection was found in falsification conditions compared to verification conditions during the N2 time window, which implied the processing of conflicting information; (3) in comparison to verification conditions, falsification conditions evoked a decreased P3 component, which was linked to the process of hypothesis evaluation; and (4) a late positive component (400-600 ms) was only triggered in the forward falsification condition, reflecting the manipulation of cognitive context.
11.
Broadening the study of inductive reasoning: confirmation judgments with uncertain evidence. Mem Cognit 2011; 38:941-950. PMID: 20921106. DOI: 10.3758/mc.38.7.941.
Abstract
Although evidence in real life is often uncertain, the psychology of inductive reasoning has, so far, been confined to certain evidence. The present study extends previous research by investigating whether people properly estimate the impact of uncertain evidence on a given hypothesis. Two experiments are reported, in which the uncertainty of evidence is explicitly (by means of numerical values) versus implicitly (by means of ambiguous pictures) manipulated. The results show that people's judgments are highly correlated with those predicted by normatively sound Bayesian measures of impact. This sensitivity to the degree of evidential uncertainty supports the centrality of inductive reasoning in cognition and opens the path to the study of this issue in more naturalistic settings.
12.
Abstract
To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point--a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences.
13.
14.
15.
Cherubini P, Rusconi P, Russo S, Di Bari S, Sacchi S. Preferences for different questions when testing hypotheses in an abstract task: positivity does play a role, asymmetry does not. Acta Psychol (Amst) 2010; 134:162-174. PMID: 20223439. DOI: 10.1016/j.actpsy.2010.01.007.
Abstract
Previous studies on hypothesis-testing behaviour have reported systematic preferences for posing positive questions (i.e., inquiries about features that are consistent with the truth of the hypothesis) and different types of asymmetric questions (i.e., questions where the hypothesis-confirming and the hypothesis-disconfirming responses have different evidential strength). Both tendencies can contribute - in some circumstances - to confirmation biases (i.e., the improper acceptance or maintenance of an incorrect hypothesis). The empirical support for asymmetric testing is, however, scarce and partly contradictory, and the relative strength of positive testing and asymmetric testing has not been empirically compared. In four studies where subjects were asked to select (Experiment 1) or evaluate (Experiments 2-4) questions for testing an abstract hypothesis, we orthogonally balanced the positivity/negativity of questions with their symmetry/asymmetry (Experiments 1-3), or with the type of asymmetry (confirmatory vs disconfirmatory; Experiment 4). In all experiments participants strongly preferred positive to negative questions. Their choices were, on the other hand, mostly unaffected by symmetry and asymmetry in general, or - more specifically - by different types of asymmetry. Other results indicated that participants were sensitive to the diagnosticity of the questions (Experiments 1-3), and that they preferred testing features with a high probability under the focal hypothesis (Experiment 4). In the discussion we argue that recourse to asymmetric testing - observed in some previous studies using more contextualized problems - probably depends on context-related motivations and prior knowledge. In abstract tasks, where that knowledge is not available, simpler strategies - such as positive testing - are prevalent.
Affiliation(s)
- Paolo Cherubini
- Department of Psychology, University of Milano-Bicocca, Milano, Italy.
16.
Hattori M, Oaksford M. Adaptive Non-Interventional Heuristics for Covariation Detection in Causal Induction: Model Comparison and Rational Analysis. Cogn Sci 2010; 31:765-814. DOI: 10.1080/03640210701530755.
17.
Précis of Bayesian Rationality: The probabilistic approach to human reasoning. Behav Brain Sci 2009; 32:69-84; discussion 85-120. PMID: 19210833. DOI: 10.1017/s0140525x09000284.
Abstract
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic--the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards. Bayesian Rationality argues that rationality is defined instead by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning. In Chapters 1-4 of Bayesian Rationality (Oaksford & Chater 2007), the case is made that cognition in general, and human everyday reasoning in particular, is best viewed as solving probabilistic, rather than logical, inference problems. In Chapters 5-7 the psychology of "deductive" reasoning is tackled head-on: It is argued that purportedly "logical" reasoning problems, revealing apparently irrational behaviour, are better understood from a probabilistic point of view. Data from conditional reasoning, Wason's selection task, and syllogistic inference are captured by recasting these problems probabilistically. The probabilistic approach makes a variety of novel predictions which have been experimentally confirmed. The book considers the implications of this work, and the wider "probabilistic turn" in cognitive science and artificial intelligence, for understanding human rationality.
18.
Naïve optimality: Subjects' heuristics can be better motivated than experimenters' optimal models. Behav Brain Sci 2009. DOI: 10.1017/s0140525x09000405.
Abstract
Is human cognition best described by optimal models, or by adaptive but suboptimal heuristic strategies? It is frequently hard to identify which theoretical model is normatively best justified. In the context of information search, naïve subjects' heuristic strategies are better motivated than some "optimal" models.
19.
Abstract
It has been unclear whether optimal experimental design accounts of data selection may offer insight into evidence acquisition tasks in which the learner's beliefs change greatly during the course of learning. Data from Rehder and Hoffman's eye movement version of Shepard, Hovland, and Jenkins' classic concept learning task provide an opportunity to address these issues. We introduce a principled probabilistic concept-learning model that describes the development of subjects' beliefs on that task. We use that learning model, together with a sampling function inspired by the theory of optimal experimental design, to predict subjects' eye movements on the active learning version of that task. Results show that the same rational sampling function can predict eye movements early in learning, when uncertainty is high, as well as late in learning, when the learner is certain of the true category.
Affiliation(s)
- Jonathan D Nelson
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Rd., La Jolla, CA 92037-1099, USA
20.
McKenzie CRM, Mikkelsen LA. A Bayesian view of covariation assessment. Cogn Psychol 2006; 54:33-61. PMID: 16764849. DOI: 10.1016/j.cogpsych.2006.04.004.
Abstract
When participants assess the relationship between two variables, each with levels of presence and absence, the two most robust phenomena are that: (a) observing the joint presence of the variables has the largest impact on judgment and observing joint absence has the smallest impact, and (b) participants' prior beliefs about the variables' relationship influence judgment. Both phenomena represent departures from the traditional normative model (the phi coefficient or related measures) and have therefore been interpreted as systematic errors. However, both phenomena are consistent with a Bayesian approach to the task. From a Bayesian perspective: (a) joint presence is normatively more informative than joint absence if the presence of variables is rarer than their absence, and (b) failing to incorporate prior beliefs is a normative error. Empirical evidence is reported showing that joint absence is seen as more informative than joint presence when it is clear that absence of the variables, rather than their presence, is rare.
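The rarity argument can be illustrated with a deliberately simplified model (my own construction, not the paper's): pit a hypothesis that the two variables are perfectly correlated against one that they are independent, and compare the likelihood ratios that the joint-presence and joint-absence cells of the 2x2 table provide.

```python
# Toy model (my construction, not the paper's): under H_rel the two
# variables are perfectly correlated; under H_ind they are independent.
# Both variables are present with base rate p.
def cell_likelihood_ratios(p):
    """Likelihood ratios P(cell | H_rel) / P(cell | H_ind) for the
    joint-presence and joint-absence cells of the 2x2 table."""
    lr_presence = p / (p * p)              # = 1/p: large when presence is rare
    lr_absence = (1 - p) / ((1 - p) ** 2)  # = 1/(1-p): large when absence is rare
    return lr_presence, lr_absence

# With rare presence (p = 0.1), joint presence is the more informative
# cell (LR 10 vs about 1.1); with rare absence (p = 0.9) the ordering
# flips, matching the reversal reported in the abstract above.
```

The design choice here is to make rarity the only free parameter, so the normative ordering of the cells follows directly from the base rate rather than from any property of the correlation itself.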
Affiliation(s)
- Craig R M McKenzie
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093-0109, USA.
21.
Abstract
In two experiments, we investigated the relationship between confidence and accuracy in syllogistic reasoning. Participants judged the validity of conclusions and provided confidence ratings twice for each problem: once quickly and again after further deliberation. Correlations between confidence and accuracy were small or nonexistent. In addition, confidence and accuracy were mediated by different variables. Confidence judgments appeared to reflect external cues, so that confidence was greater when the participants were allowed additional time to think about the problem, as well as when the conclusion was either believable or unbelievable, rather than neutral. In contrast, accuracy changed little as a function of the amount of time available and did not differ for believable and neutral problems. These data support a model in which initial decisions are made quickly, on the basis of heuristic cues, and analytic processes are used to justify or rationalize the earlier decision.
22.
Nelson JD. Finding useful questions: on Bayesian diagnosticity, probability, impact, and information gain. Psychol Rev 2006; 112:979-999. PMID: 16262476. DOI: 10.1037/0033-295x.112.4.979.
Abstract
Several norms for how people should assess a question's usefulness have been proposed, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment, medical diagnosis, and the selection task are shown to not discriminate among these norms as descriptive models of human intuitions and behavior. Computational optimization found situations in which information gain, probability gain, and impact strongly contradict Bayesian diagnosticity. In these situations, diagnosticity's claims are normatively inferior. Results of a new experiment strongly contradict the predictions of Bayesian diagnosticity. Normative theoretical concerns also argue against use of diagnosticity. It is concluded that Bayesian diagnosticity is normatively flawed and empirically unjustified.
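For a two-hypothesis, one-question case, the competing norms can be sketched in a few lines (illustrative priors and likelihoods of my own, not Nelson's materials):

```python
import math

# Illustrative priors and likelihoods (my own numbers, not Nelson's).
prior = {"H1": 0.7, "H2": 0.3}
p_ans = {"H1": {"yes": 0.9, "no": 0.1},   # P(answer | hypothesis)
         "H2": {"yes": 0.3, "no": 0.7}}
answers = ("yes", "no")

def p_answer(a):
    return sum(prior[h] * p_ans[h][a] for h in prior)

def posterior(a):
    z = p_answer(a)
    return {h: prior[h] * p_ans[h][a] / z for h in prior}

def entropy(d):
    return -sum(p * math.log2(p) for p in d.values() if p > 0)

# Information gain: expected reduction in entropy over the hypotheses.
info_gain = entropy(prior) - sum(p_answer(a) * entropy(posterior(a))
                                 for a in answers)

# Probability gain: expected rise in the probability of guessing correctly.
prob_gain = sum(p_answer(a) * max(posterior(a).values())
                for a in answers) - max(prior.values())

# Impact: expected absolute change in belief in H1.
impact = sum(p_answer(a) * abs(posterior(a)["H1"] - prior["H1"])
             for a in answers)

# Bayesian diagnosticity rates each individual answer by its likelihood ratio.
diagnosticity = {a: p_ans["H1"][a] / p_ans["H2"][a] for a in answers}
```

Because the norms aggregate the same posteriors differently, they can disagree about which of several candidate questions is most useful; constructing such disagreements is the leverage the paper's computational optimization exploits.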
Affiliation(s)
- Jonathan D Nelson
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093-0515, USA.
23.
Abstract
Framing effects occur when logically equivalent redescriptions of objects or outcomes lead to different behaviors, and, traditionally, such effects have been seen as irrational. However, recent evidence has shown that a speaker's choice among logically equivalent attribute frames can implicitly convey (or "leak") normatively relevant information about the speaker's reference point, among other things. In a reinterpretation of data published elsewhere, this article shows that some common effects in inference tasks (covariation assessment and hypothesis testing) can also be seen as framing effects, thereby expanding the domain of framing. It is also shown that these framing effects are normatively defensible because normatively relevant information about event rarity is leaked through the description of data and through the phrasing of hypotheses, thereby broadening the information leakage approach to explaining framing effects. Information leakage can also explain why framing effects in such inference tasks disappear under certain conditions.
Affiliation(s)
- Craig R M McKenzie
- Department of Psychology, University of California San Diego, La Jolla, CA 92093-0109, USA.
24.
Oberauer K, Weidenfeld A, Hörnig R. Logical reasoning and probabilities: A comprehensive test of Oaksford and Chater (2001). Psychon Bull Rev 2004; 11:521-527. PMID: 15376805. DOI: 10.3758/bf03196605.
Abstract
We report two experiments testing a central prediction of the probabilistic account of reasoning provided by Oaksford and Chater (2001): Acceptance of standard conditional inferences, card choices in the Wason selection task, and quantifiers chosen for conclusions from syllogisms should vary as a function of the frequency of the concepts involved. Frequency was manipulated by a probability-learning phase preceding the reasoning tasks to simulate natural sampling. The effects predicted by Oaksford and Chater (2001) were not obtained with any of the three paradigms.
Affiliation(s)
- Klaus Oberauer
- Department of Psychology, University of Potsdam, Potsdam, Germany.
25.
Abstract
Researchers in thinking and reasoning have proposed recently that there are two distinct cognitive systems underlying reasoning. System 1 is old in evolutionary terms and shared with other animals: it comprises a set of autonomous subsystems that include both innate input modules and domain-specific knowledge acquired by a domain-general learning mechanism. System 2 is evolutionarily recent and distinctively human: it permits abstract reasoning and hypothetical thinking, but is constrained by working memory capacity and correlated with measures of general intelligence. These theories essentially posit two minds in one brain with a range of experimental psychological evidence showing that the two systems compete for control of our inferences and actions.
26.
Abstract
When people's behavior in laboratory tasks systematically deviates from a rational model, the implication is that real-world performance could be improved by changing the behavior. However, recent studies suggest that behavioral violations of rational models are at least sometimes the result of strategies that are well adapted to the real world (and not necessarily to the laboratory task). Thus, even if one accepts that certain behavior in the laboratory is irrational, compelling evidence that real-world behavior ought to change accordingly is often lacking. It is suggested here that rational models be seen as theories, and not standards, of behavior.
Affiliation(s)
- Craig R.M. McKenzie
- Department of Psychology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0109, USA
27.
Abstract
Since it first appeared, there has been much research and critical discussion on the theory of optimal data selection as an explanation of Wason's (1966, 1968) selection task (Oaksford & Chater, 1994). In this paper, this literature is reviewed, and the theory of optimal data selection is re-evaluated in its light. The information gain model is first located in the current theoretical debate in the psychology of reasoning concerning dual processes in human reasoning. A model comparison exercise is then presented that compares a revised version of the model with its theoretical competitors. Tests of the novel predictions of the model are then reviewed. This section also reviews experiments claimed to be inconsistent with optimal data selection. Finally, theoretical criticisms of optimal data selection are discussed. It is argued either that the revised model accounts for them or that they do not stand up under analysis. It is concluded that some version of the optimal data selection model still provides the best account of the selection task. Consequently, the conclusion of Oaksford and Chater's (1994) original rational analysis (Anderson, 1990), that people's hypothesis-testing behavior on this task is rational and well adapted to the environment, still stands.
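The core idea behind the information gain model can be sketched in a few lines (a simplified reconstruction under assumed parameter values, not Oaksford and Chater's published implementation): a dependence hypothesis (if p then q) competes with an independence hypothesis, and each card is scored by the expected reduction in uncertainty from turning it.

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def hidden_prob(card, hyp, a, b):
    """P(hidden side shows the relevant item | visible card, hypothesis).
    a = P(p), b = P(q), with b >= a assumed; 'dep' asserts if-p-then-q,
    'ind' asserts that p and q are independent."""
    if card == "p":      # hidden side: is it q?
        return 1.0 if hyp == "dep" else b
    if card == "not_p":  # hidden side: is it q?
        return (b - a) / (1 - a) if hyp == "dep" else b
    if card == "q":      # hidden side: is it p?
        return a / b if hyp == "dep" else a
    if card == "not_q":  # hidden side: is it p?
        return 0.0 if hyp == "dep" else a

def expected_info_gain(card, a=0.1, b=0.2):
    """Expected reduction in uncertainty about dep vs ind from turning card."""
    prior = {"dep": 0.5, "ind": 0.5}
    gain = 0.0
    for outcome in (True, False):
        like = {h: hidden_prob(card, h, a, b) for h in prior}
        if not outcome:
            like = {h: 1 - v for h, v in like.items()}
        p_out = sum(prior[h] * like[h] for h in prior)
        if p_out == 0:
            continue
        post = [prior[h] * like[h] / p_out for h in prior]
        gain += p_out * (entropy(prior.values()) - entropy(post))
    return gain

# With rarity (a and b small), this sketch ranks the cards
# p > q > not_q > not_p, mirroring typical selection frequencies.
```

The rarity assumption (small a and b) is doing the explanatory work: with common antecedents and consequents, the ranking changes, which is why the model treats people's default rarity expectations as part of the rational analysis.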
Affiliation(s)
- Mike Oaksford
- School of Psychology, Cardiff University, Cardiff, Wales.
28.
Abstract
The study of deductive reasoning has been a major paradigm in psychology for approximately the past 40 years. Research has shown that people make many logical errors on such tasks and are strongly influenced by problem content and context. It is argued that this paradigm was developed in a context of logicist thinking that is now outmoded. Few reasoning researchers still believe that logic is an appropriate normative system for most human reasoning, let alone a model for describing the process of human reasoning, and many use the paradigm principally to study pragmatic and probabilistic processes. It is suggested that the methods used for studying reasoning be reviewed, especially the instructional context, which necessarily defines pragmatic influences as biases.
Affiliation(s)
- Jonathan St B T Evans
- Centre for Thinking and Language, Department of Psychology, University of Plymouth, England.
|
29
|
Abstract
The authors outline a theory of conditionals of the form If A then C and If A then possibly C. The 2 sorts of conditional have separate core meanings that refer to sets of possibilities. Knowledge, pragmatics, and semantics can modulate these meanings. Modulation can add information about temporal and other relations between antecedent and consequent, and it can also block the construction of certain possibilities. Together, these interpretative processes yield 10 distinct sets of possibilities to which conditionals can refer. The mental representation of a conditional normally makes explicit only the possibilities in which its antecedent is true, yielding other possibilities implicitly. Reasoners tend to focus on the explicit possibilities. The theory predicts the major phenomena of understanding and reasoning with conditionals.
Affiliation(s)
- P N Johnson-Laird
- Department of Psychology, Princeton University, New Jersey 08544, USA.
|
32
|
Abstract
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently, participants' behaviour appears irrational only because it is compared with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task, and syllogistic reasoning.
Affiliation(s)
- M Oaksford
- School of Psychology, Cardiff University, PO Box 901, CF1 3YG, Cardiff, UK
|
33
|
McKenzie CRM, Ferreira VS, Mikkelsen LA, McDermott KJ, Skrable RP. Do Conditional Hypotheses Target Rare Events? Organ Behav Hum Decis Process 2001; 85:291-309. [PMID: 11461203 DOI: 10.1006/obhd.2000.2947] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
When testing hypotheses, rare or unexpected observations are normatively more informative than common observations, and recent studies have shown that participants' behavior reflects this principle. Research has also shown that, when asked to test conditional hypotheses ("If X, then Y") that are abstract or unfamiliar, participants overwhelmingly consider a supporting observation mentioned in the hypothesis (X&Y) to be more informative than a supporting observation not mentioned (~X & ~Y). These two empirical findings would mesh well if conditional hypotheses tend to be phrased in terms of rare, rather than common, events. Six experiments are reported indicating that people do have a tendency, often a very strong one, to phrase conditional hypotheses in terms of rare events. Thus, observations mentioned in conditional hypotheses might generally be considered highly informative because they usually are highly informative.
|
34
|
Abstract
People often test hypotheses about two variables (X and Y), each with two levels (e.g., X1 and X2). When testing "If X1, then Y1," observing the conjunction of X1 and Y1 is overwhelmingly perceived as more supportive than observing the conjunction of X2 and Y2, although both observations support the hypothesis. Normatively, the X2&Y2 observation provides stronger support than the X1&Y1 observation if the former is rarer. Because participants in laboratory settings typically test hypotheses they are unfamiliar with, previous research has not examined whether participants are sensitive to the rarity of observations. The experiment reported here showed that participants were sensitive to rarity, even judging a rare X2&Y2 observation more supportive than a common X1&Y1 observation under certain conditions. Furthermore, participants' default strategy of judging X1&Y1 observations more informative might be generally adaptive because hypotheses usually regard rare events.
Affiliation(s)
- C R McKenzie
- Department of Psychology, University of California, San Diego, La Jolla 92093-0109, USA.
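The normative point in this abstract can be sketched with log-likelihood ratios. The toy model below is an illustrative assumption, not the authors' materials: under hypothesis H the two variables are perfectly associated, while under not-H they are independent with the same marginals. Whichever confirming cell is rarer then carries more evidential weight.

```python
import math

def support_bits(p_obs_h, p_obs_not_h):
    """Log-likelihood ratio in bits: evidence an observation gives for H over not-H."""
    return math.log2(p_obs_h / p_obs_not_h)

def confirmation_strengths(p):
    """p = P(X1) = P(Y1).

    H: perfect association, so P(X1&Y1) = p and P(X2&Y2) = 1 - p.
    not-H: independence with the same marginals.
    """
    return {
        "X1&Y1": support_bits(p, p * p),             # = log2(1/p)
        "X2&Y2": support_bits(1 - p, (1 - p) ** 2),  # = log2(1/(1 - p))
    }

rare_x1 = confirmation_strengths(p=0.1)    # X1, Y1 rare: X1&Y1 is the stronger support
common_x1 = confirmation_strengths(p=0.9)  # X1, Y1 common: X2&Y2 is the stronger support
```

Because hypotheses are usually phrased in terms of rare events, the mentioned cell (X1&Y1) is usually the rare one, which is why treating it as the more informative observation can be adaptive rather than a bias.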
|
36
|
Abstract
The mental model theory postulates that reasoners build models of the situations described in premises, and that these models normally make explicit only what is true. The theory has an unexpected consequence: it predicts the occurrence of inferences that are compelling but invalid. They should arise from reasoners failing to take into account what is false. Three experiments corroborated the systematic occurrence of these illusory inferences, and eliminated a number of alternative explanations for them. Their results illuminate the controversy among various current theories of reasoning.
|
37
|
Oberauer K, Wilhelm O, Diaz RR. Bayesian Rationality for the Wason Selection Task? A Test of Optimal Data Selection Theory. Thinking & Reasoning 1999. [DOI: 10.1080/135467899394020] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
|
38
|
Abstract
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic.
Affiliation(s)
- N Chater
- University of Warwick, Coventry, United Kingdom.
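The min-heuristic described in this abstract can be sketched directly. The informativeness ordering follows PHM's ordering over quantified statement types (All > Most > Few > Some > None > Some-not); the integer encoding and function name are illustrative assumptions.

```python
# PHM's informativeness ordering over quantified statement types,
# encoded as integers for comparison (larger = more informative).
INFORMATIVENESS = {"All": 6, "Most": 5, "Few": 4, "Some": 3, "None": 2, "Some-not": 1}

def min_heuristic(premise_type_1, premise_type_2):
    """Min-heuristic: the conclusion takes the type of the least informative premise."""
    return min(premise_type_1, premise_type_2, key=INFORMATIVENESS.get)

conclusion = min_heuristic("All", "Some")   # "Some" is the less informative premise
generalized = min_heuristic("Most", "Few")  # extends naturally to Most and Few
```

Note that because the ordering includes Most and Few, the heuristic applies to generalized quantifiers that, as the abstract points out, fall outside the scope of purely logical treatments.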
|
39
|
Abstract
This chapter describes the main accounts of deductive competence, which explain what is computed in carrying out deductions. It argues that people have a modicum of competence, which is useful in daily life and a prerequisite for acquiring logical expertise. It outlines the three main sorts of theory of deductive performance, which explain how people make deductions: They rely on factual knowledge, formal rules, or mental models. It reviews recent experimental studies of deductive reasoning to help readers assess these theories of performance.
Affiliation(s)
- P N Johnson-Laird
- Department of Psychology, Princeton University, Princeton, New Jersey 08544, USA.
|
40
|
Stanovich KE, West RF. Cognitive Ability and Variation in Selection Task Performance. Thinking & Reasoning 1998. [DOI: 10.1080/135467898394139] [Citation(s) in RCA: 65] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
41
|
Abstract
Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.
|
42
|
Stanovich KE, West RF. Who uses base rates and P(D|~H)? An analysis of individual differences. Mem Cognit 1998; 26:161-79. [PMID: 9519706 DOI: 10.3758/bf03211379] [Citation(s) in RCA: 43] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
In two experiments, involving over 900 subjects, we examined the cognitive correlates of the tendency to view P(D|~H) and base rate information as relevant to probability assessment. We found that individuals who viewed P(D|~H) as relevant in a selection task and who used it to make the proper Bayesian adjustment in a probability assessment task scored higher on tests of cognitive ability and were better deductive and inductive reasoners. They were less biased by prior beliefs and more data-driven on a covariation assessment task. In contrast, individuals who thought that base rates were relevant did not display better reasoning skill or higher cognitive ability. Our results parallel disputes about the normative status of various components of the Bayesian formula in interesting ways. It is argued that patterns of covariance among reasoning tasks may have implications for inferences about what individuals are trying to optimize in a rational analysis (J. R. Anderson, 1990, 1991).
Affiliation(s)
- K E Stanovich
- Department of Applied Psychology, Ontario Institute for Studies in Education, University of Toronto, ON, Canada.
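The two Bayesian components this abstract contrasts, the base rate P(H) and the likelihood of the datum under the alternative P(D|~H), can be made concrete with Bayes' rule. This is a minimal sketch with illustrative numbers, not the authors' task materials.

```python
def posterior(prior, p_d_given_h, p_d_given_not_h):
    """Bayes' rule: P(H|D) depends on both the base rate (prior) and P(D|~H)."""
    numer = p_d_given_h * prior
    return numer / (numer + p_d_given_not_h * (1 - prior))

# A datum 8 times likelier under H than under ~H still leaves H improbable
# when the base rate is low: neglecting either component misleads.
p_h_given_d = posterior(prior=0.01, p_d_given_h=0.8, p_d_given_not_h=0.1)
```

Here the proper Bayesian adjustment requires using both quantities that the individual-differences analysis treats separately: someone who ignores P(D|~H) would wildly overestimate P(H|D), and someone who ignores the base rate would do the same.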
|