1
Xie B, Hayes B. Sensitivity to Evidential Dependencies in Judgments Under Uncertainty. Cogn Sci 2022; 46:e13144. PMID: 35579865; PMCID: PMC9285361; DOI: 10.1111/cogs.13144.
Abstract
According to Bayesian models of judgment, testimony from independent informants has more evidential value than dependent testimony. Three experiments investigated learners' sensitivity to this distinction. Each experiment used a social version of the balls‐and‐urns task, in which participants judged which of two urns was the most likely source of evidence presented by multiple informants. Informants either provided independent testimony based solely on their own observations or dependent‐sequential testimony that considered the testimonies of previous informants. Although participants updated their beliefs with additional evidence, this updating was generally insensitive to evidential dependency (Experiments 1 and 2). A notable exception was when individuals were separated according to their beliefs about the relative value of independent and sequential evidence. Those who viewed independent evidence as having greater value subsequently gave more weight to independent testimony in the balls‐and‐urns task (Experiment 3), in line with the predictions of a Bayesian model. Our findings suggest that only a minority of individuals conform to Bayesian predictions in the relative weighting of independent and dependent evidence in judgments under uncertainty.
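The Bayesian benchmark these experiments compare against can be sketched numerically. The urn compositions (60% vs. 40% red), the uniform prior, and the decision to treat a fully dependent-sequential chain as a single informative report are illustrative assumptions, not the paper's actual parameters:

```python
from math import prod

def posterior_urn_a(reports, p_red_a=0.6, p_red_b=0.4, prior_a=0.5):
    """Posterior probability that urn A produced the reported ball colours,
    treating each report as an independent draw."""
    like_a = prod(p_red_a if r == "red" else 1 - p_red_a for r in reports)
    like_b = prod(p_red_b if r == "red" else 1 - p_red_b for r in reports)
    return like_a * prior_a / (like_a * prior_a + like_b * (1 - prior_a))

# Three informants independently report "red": each report adds evidence.
independent = posterior_urn_a(["red", "red", "red"])
# If the second and third merely echoed the first, only one report is informative.
dependent = posterior_urn_a(["red"])
print(independent > dependent)  # independent testimony supports urn A more strongly
```

On these assumptions the three independent reports push the posterior to about 0.77, while the dependent chain leaves it at 0.60 — the gap that Bayesian-consistent participants should show.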
Affiliation(s)
- Belinda Xie
- School of Psychology, University of New South Wales Sydney
- Brett Hayes
- School of Psychology, University of New South Wales Sydney
2
Abstract
The diversity principle-the intuitive notion that diverse evidence is, all else equal, more persuasive, suggestive, confirmatory, or otherwise better than less varied sets of evidence-is a clear component of scientific practice and endorsed by scientists and philosophers alike. A great body of psychological research on people's understanding and application of the diversity principle exists, yet it remains divided into multiple, distinct research communities, which often come to conflicting conclusions. One reason for this is that the range of tasks and domains investigated is appropriately wide. Without a common understanding of what it means for evidence to be diverse, however, it is hard to compare what are at times diverging results. To address this, I review three perspectives from philosophy on what makes diverse evidence valuable. I will use the perspectives to frame results from psychology and assess whether people understand the value of diverse evidence on both an intuitive and explicit level. My conclusions offer a qualified optimism: While people are generally aware of the value of diverse evidence, they often struggle to apply what they know. I argue this is because people do not assess the diversity of their evidence as a matter of course but rely on its intuitive diversity as a cue to its evidential diversity. When this cue is absent, people can overlook otherwise obvious problems with their evidence. This has potential consequences for how people seek out, evaluate, and understand evidence from a variety of domains, but leaves open the possibility that various interventions-such as education or reminders to attend to the quality of evidence-may increase people's application of what they know.
3
Gesser-Edelsburg A, Zemach M, Hijazi R. Who are the "Real" Experts? The Debate Surrounding COVID-19 Health Risk Management: An Israeli Case Study. Risk Manag Healthc Policy 2021; 14:2553-2569. PMID: 34188567; PMCID: PMC8232964; DOI: 10.2147/rmhp.s311334.
Abstract
BACKGROUND The uncertainty surrounding the COVID-19 crisis and the different approaches taken to manage it have triggered scientific controversies among experts. This study seeks to examine how the fragile nature of Israeli democracy accommodated differences of opinion between experts during the COVID-19 crisis. OBJECTIVE To map and analyze the discourse between experts surrounding issues that were the topic of scientific controversy, and to examine the viewpoints of the public regarding the positions of the different experts. METHODS AND SAMPLE A sequential mixed study design. The qualitative research was a discourse analysis of 435 items that entailed mapping the voices of different experts regarding controversial topics. In the quantitative study, a total of 924 participants answered a questionnaire examining topics that engendered differences of opinion between the experts. RESULTS The results showed that there was no dialogue between opposition and coalition experts. Moreover, the coalition experts labeled the experts who criticized them as "coronavirus deniers" and "anti-vaxxers." The coalition changed its opinion on one issue only-the issue of lockdowns. When we asked the public how they see the scientific controversy between the coalition and the opposition experts, they expressed support for opposition policies on matters related to the implications of the lockdowns and to transparency, while supporting government policy mainly on topics related to vaccinations. The research findings also indicate that personal and socio-demographic variables can influence how the public responds to the debate between experts. The main differentiating variables were the personal attribute of conservatism, locus of control, age, and nationality. CONCLUSION Controversy must be encouraged to prevent misconceptions. The internal discourse in the committees that advise the government must be transparent, and coalition experts must be consistently exposed to the views of opposition experts, who must be free to voice their views without fear.
Affiliation(s)
- Anat Gesser-Edelsburg
- School of Public Health and the Health and Risk Communication Research Center, University of Haifa, Haifa, 3498838, Israel
- Mina Zemach
- Midgam Research & Consulting Ltd, Bnei Brak, 5126112, Israel
- Rana Hijazi
- School of Public Health and the Health and Risk Communication Research Center, University of Haifa, Haifa, 3498838, Israel
4
Dependencies in evidential reports: The case for informational advantages. Cognition 2020; 204:104343. PMID: 32599310; DOI: 10.1016/j.cognition.2020.104343.
Abstract
Whether assessing the accuracy of expert forecasting, the pros and cons of group communication, or the value of evidence in diagnostic or predictive reasoning, dependencies between experts, group members, or evidence have traditionally been seen as a form of redundancy. We demonstrate that this conception of dependence conflates the structure of a dependency network with the observations made across that network. By disentangling these two elements we show, via mathematical proof and specific examples, that there are cases where dependencies yield an informational advantage over independence. More precisely, when a structural dependency exists but observations are either partial or contradictory, those observations provide more support to a hypothesis than they would if the structural dependency did not exist, ceteris paribus. Furthermore, we show that lay reasoners endorse the sufficient assumptions underpinning these advantageous structures yet fail to appreciate their implications for probability judgments and belief revision.
5
Mercier H, Morin O. Majority rules: how good are we at aggregating convergent opinions? Evolutionary Human Sciences 2019; 1:e6. PMID: 37588400; PMCID: PMC10427311; DOI: 10.1017/ehs.2019.6.
Abstract
Mathematical models and simulations demonstrate the power of majority rules, i.e., following an opinion shared by a majority of group members. Majority opinion should be followed more when (a) the relative and absolute size of the majority grows, (b) the members of the majority are competent, (c) the members of the majority are benevolent, (d) the majority opinion conflicts less with our prior beliefs, and (e) the members of the majority formed their opinions independently. We review the experimental literature bearing on these points. The few experiments bearing on (b) and (c) suggest that both factors are adequately taken into account. Many experiments show that (d) is also followed, with participants usually putting too much weight on their own opinion relative to that of the majority. Regarding factors (a) and (e), in contrast, the evidence is mixed: participants sometimes optimally take into account the absolute and relative size of the majority, as well as the presence of informational dependencies; in other circumstances, these factors are ignored. We suggest that an evolutionary framework can help make sense of these conflicting results by distinguishing between evolutionarily valid cues, which are readily taken into account, and non-evolutionarily valid cues, which are ignored by default.
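Point (a) is the classic Condorcet jury logic. A minimal sketch, with the competence level 0.6 chosen purely for illustration:

```python
from math import comb

def p_majority_correct(n, p):
    """Probability that a majority of n independent judges is correct,
    when each judge is independently correct with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With competence 0.6, a larger independent majority is more reliable.
print(p_majority_correct(3, 0.6))   # 0.648
print(p_majority_correct(11, 0.6))  # larger, approaching 1 as n grows
```

Note that the independence assumption in point (e) is doing real work here: if the judges merely copied one another, the effective n would shrink and the guarantee would evaporate.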
Affiliation(s)
- Hugo Mercier
- Institut Jean Nicod, PSL University, CNRS, Paris, France
- Olivier Morin
- Max Planck Institute for the Science of Human History, Jena, Germany
6
Barneron M, Allalouf A, Yaniv I. Rate it again: Using the wisdom of many to improve performance evaluations. Journal of Behavioral Decision Making 2019. DOI: 10.1002/bdm.2127.
Affiliation(s)
- Meir Barneron
- Department of Psychology, Hebrew University of Jerusalem, and National Institute for Testing and Evaluation, Jerusalem, Israel
- Avi Allalouf
- National Institute for Testing and Evaluation, Jerusalem, Israel
- Ilan Yaniv
- Department of Psychology and the Federmann Center for the Study of Rationality, Hebrew University of Jerusalem, Jerusalem, Israel
7
Zhao WJ, Davis-Stober CP, Bhatia S. Optimal cue aggregation in the absence of criterion knowledge. Journal of Behavioral Decision Making 2019. DOI: 10.1002/bdm.2123.
Affiliation(s)
- Wenjia Joyce Zhao
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
- Sudeep Bhatia
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
8
Blunden H, Logg JM, Brooks AW, John LK, Gino F. Seeker beware: The interpersonal costs of ignoring advice. Organizational Behavior and Human Decision Processes 2019. DOI: 10.1016/j.obhdp.2018.12.002.
9
Calvo H, Carrillo-Mendoza P, Gelbukh A. On redundancy in multi-document summarization. Journal of Intelligent & Fuzzy Systems 2018. DOI: 10.3233/jifs-169507.
Affiliation(s)
- Hiram Calvo
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
- Pabel Carrillo-Mendoza
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
- Alexander Gelbukh
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
10
Factors influencing board of directors’ decision-making process as determinants of CSR engagement. Review of Managerial Science 2016. DOI: 10.1007/s11846-016-0220-1.
11
Abstract
In daily decision making, people often solicit one another's opinions in the hope of improving their own judgment. According to both theory and empirical results, integrating even a few opinions is beneficial, with the accuracy gains diminishing as the bias of the judges or the correlation between their opinions increases. Decision makers using intuitive policies for integrating others' opinions rely on a variety of accuracy cues in weighting the opinions they receive. They tend to discount dissenters and to give greater weight to their own opinion than to other people's opinions.
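The diminishing accuracy gains from additional, correlated opinions follow from the variance of an average. A textbook sketch (equal variances and a single common pairwise correlation are simplifying assumptions, not a claim about this paper's model):

```python
def variance_of_average(j, sigma2=1.0, rho=0.0):
    """Variance of the mean of j opinions, each with variance sigma2 and a
    common pairwise correlation rho between any two opinions."""
    return sigma2 * (1 + (j - 1) * rho) / j

# Independent opinions (rho = 0): variance falls like 1/j.
# Correlated opinions: variance is floored at sigma2 * rho, however many judges.
for rho in (0.0, 0.5):
    print([round(variance_of_average(j, rho=rho), 3) for j in (1, 2, 5, 25)])
```

With rho = 0.5 the twenty-fifth opinion buys almost nothing: the error of the average can never drop below the shared component, which is why "integrating even a few opinions" captures most of the available gain.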
Affiliation(s)
- Ilan Yaniv
- Hebrew University of Jerusalem, Jerusalem, Israel
12
Herzog SM, von Helversen B. Strategy Selection Versus Strategy Blending: A Predictive Perspective on Single- and Multi-Strategy Accounts in Multiple-Cue Estimation. Journal of Behavioral Decision Making 2016. DOI: 10.1002/bdm.1958.
13
Abstract
Failure to distinguish between statistical effects and genuine social interaction may lead to unwarranted conclusions about the role of self-differentiation in group function. We offer an introduction to these issues from the perspective of recent research on collaborative cognition.
14
Hahn U, Harris AJL, Corner A. Public Reception of Climate Science: Coherence, Reliability, and Independence. Top Cogn Sci 2015; 8:180-95. PMID: 26705767; DOI: 10.1111/tops.12173.
Abstract
Possible measures to mitigate climate change require global collective actions whose impacts will be felt by many, if not all. Implementing such actions requires successful communication of the reasons for them, and hence the underlying climate science, to a degree that far exceeds typical scientific issues which do not require large-scale societal response. Empirical studies have identified factors, such as the perceived level of consensus in scientific opinion and the perceived reliability of scientists, that can limit people's trust in science communicators and their subsequent acceptance of climate change claims. Little consideration has been given, however, to recent formal results within philosophy concerning the relationship between truth, the reliability of evidence sources, the coherence of multiple pieces of evidence/testimonies, and the impact of (non-)independence between sources of evidence. This study draws on these results to evaluate exactly what has (and, more important, has not yet) been established in the empirical literature about the factors that bias the public's reception of scientific communications about climate change.
Affiliation(s)
- Ulrike Hahn
- Department of Psychological Sciences, Birkbeck, University of London
- Adam J L Harris
- Department of Experimental Psychology, University College London
15
Debiasing illusion of control in individual judgment: the role of internal and external advice seeking. Review of Managerial Science 2014. DOI: 10.1007/s11846-014-0144-6.
16
Fraundorf SH, Benjamin AS. Knowing the crowd within: Metacognitive limits on combining multiple judgments. Journal of Memory and Language 2014; 71:17-38. PMID: 24511178; PMCID: PMC3915883; DOI: 10.1016/j.jml.2013.10.002.
Abstract
We investigated how decision-makers use multiple opportunities to judge a quantity. Decision-makers undervalue the benefit of combining their own judgment with an advisor's, but theories disagree about whether this bias would apply to combining several of one's own judgments. Participants estimated percentage answers to general knowledge questions (e.g., What percent of the world's population uses the Internet?) on two occasions. In a final decision phase, they selected their first, second, or average estimate to report for each question. We manipulated the cues available for this final decision. Given cues to general theories (the labels first guess, second guess, average), participants mostly averaged, but no more frequently on trials where the average was most accurate. Given item-specific cues (numerical values of the options), metacognitive accuracy was at chance. Given both cues, participants mostly averaged and switched strategies based on whichever yielded the most accurate value on a given trial. These results indicate that underappreciation of averaging estimates does not stem only from social differences between the self and an advisor and that combining general and item-specific cues benefits metacognition.
17
Mercier H, Kawasaki Y, Yama H, Adachi K, Van der Henst JB. Is the Use of Averaging in Advice Taking Modulated by Culture? Journal of Cognition and Culture 2012. DOI: 10.1163/156853712x633893.
18
Why don't we believe non-native speakers? The influence of accent on credibility. Journal of Experimental Social Psychology 2010. DOI: 10.1016/j.jesp.2010.05.025.
20
Ranganath KA, Spellman BA, Joy-Gaba JA. Cognitive “Category-Based Induction” Research and Social “Persuasion” Research Are Each About What Makes Arguments Believable. Perspectives on Psychological Science 2010; 5:115-22. DOI: 10.1177/1745691610361604.
Abstract
Social and cognitive psychologists each study factors that influence the believability of arguments, but they have worked mostly in parallel. We briefly examine and compare the dominant theories explaining argument believability in the social persuasion literature and the cognitive category-based induction literature. Although the two areas ask similar questions, they use different paradigms to study different aspects of the issues. We describe each area’s major paradigms and questions and then examine the conclusions that each area draws regarding the role of five variables important to argument believability: (a) the number of sources/premises, (b) the similarity between sources/premises, (c) individual differences in characteristics of the reasoner, (d) the available resources, and (e) the reasoner’s background knowledge and beliefs. Comparing the two literatures provides a more complete picture of the factors influencing argument believability and provides fruitful new avenues for integration and exploration.
Affiliation(s)
- Kate A. Ranganath
- Department of Social Psychology, Tilburg University, Tilburg, The Netherlands
21
Herzog SM, Hertwig R. The wisdom of many in one mind: improving individual judgments with dialectical bootstrapping. Psychol Sci 2009; 20:231-7. PMID: 19170937; DOI: 10.1111/j.1467-9280.2009.02271.x.
Abstract
The "wisdom of crowds" in making judgments about the future or other unknown events is well established. The average quantitative estimate of a group of individuals is consistently more accurate than the typical estimate, and is sometimes even the best estimate. Although individuals' estimates may be riddled with errors, averaging them boosts accuracy because both systematic and random errors tend to cancel out across individuals. We propose exploiting the power of averaging to improve estimates generated by a single person by using an approach we call dialectical bootstrapping. Specifically, it should be possible to reduce a person's error by averaging his or her first estimate with a second one that harks back to somewhat different knowledge. We derive conditions under which dialectical bootstrapping fosters accuracy and provide an empirical demonstration that its benefits go beyond reliability gains. A single mind can thus simulate the wisdom of many.
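The error-cancellation argument can be illustrated with a small simulation. The bias and noise parameters below are arbitrary, and this is not the authors' task — just a sketch of why averaging two estimates that share a systematic bias but carry independent random error reduces expected error:

```python
import random

def simulate(n=10_000, truth=100.0, bias=5.0, sd=10.0, seed=0):
    """Compare the mean absolute error of a single estimate with that of the
    average of two estimates sharing a systematic bias but with independent noise."""
    rng = random.Random(seed)
    single = avg = 0.0
    for _ in range(n):
        e1 = truth + bias + rng.gauss(0, sd)  # estimate = truth + bias + noise
        e2 = truth + bias + rng.gauss(0, sd)
        single += abs(e1 - truth)
        avg += abs((e1 + e2) / 2 - truth)
    return single / n, avg / n

mae_single, mae_avg = simulate()
print(mae_single > mae_avg)  # random error partly cancels; the shared bias does not
```

Averaging shrinks the random component by a factor of 1/√2 but leaves the shared bias untouched — the condition under which dialectical bootstrapping helps is precisely that the second estimate's errors are not fully redundant with the first's.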
Affiliation(s)
- Stefan M Herzog
- Department of Psychology, University of Basel, Missionsstrasse 64A, Basel, Switzerland.
22
Effects of amount of information on judgment accuracy and confidence. Organizational Behavior and Human Decision Processes 2008. DOI: 10.1016/j.obhdp.2008.01.005.
23
Yaniv I, Milyavsky M. Using advice from multiple sources to revise and improve judgments. Organizational Behavior and Human Decision Processes 2007. DOI: 10.1016/j.obhdp.2006.05.006.
24
Leal J, Wordsworth S, Legood R, Blair E. Eliciting expert opinion for economic models: an applied example. Value in Health 2007; 10:195-203. PMID: 17532812; DOI: 10.1111/j.1524-4733.2007.00169.x.
Abstract
OBJECTIVES Expert opinion is considered as a legitimate source of information for decision-analytic modeling where required data are unavailable. Our objective was to develop a practical computer-based tool for eliciting expert opinion about the shape of the uncertainty distribution around individual model parameters. METHODS We first developed a prepilot survey with departmental colleagues to test a number of alternative approaches to eliciting opinions on the shape of the uncertainty distribution around individual parameters. This information was used to develop a survey instrument for an applied clinical example. This involved eliciting opinions from experts to inform a number of parameters involving Bernoulli processes in an economic model evaluating DNA testing for families with a genetic disease, hypertrophic cardiomyopathy. The experts were cardiologists, clinical geneticists, and laboratory scientists working with cardiomyopathy patient populations and DNA testing. RESULTS Our initial prepilot work suggested that the more complex elicitation techniques advocated in the literature were difficult to use in practice. In contrast, our approach achieved a reasonable response rate (50%), provided logical answers, and was generally rated as easy to use by respondents. The computer software user interface permitted graphical feedback throughout the elicitation process. The distributions obtained were incorporated into the model, enabling the use of probabilistic sensitivity analysis. CONCLUSION There is clearly a gap in the literature between theoretical elicitation techniques and tools that can be used in applied decision-analytic models. The results of this methodological study are potentially valuable for other decision analysts deriving expert opinion.
Affiliation(s)
- José Leal
- Health Economics Research Centre, Department of Public Health, University of Oxford, Old Road Campus, Oxford, UK.
25
Budescu DV, Yu HT. Aggregation of opinions based on correlated cues and advisors. Journal of Behavioral Decision Making 2007. DOI: 10.1002/bdm.547.
26
Bonaccio S, Dalal RS. Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes 2006. DOI: 10.1016/j.obhdp.2006.07.001.
27
Thirst for confirmation in multi-attribute choice: Does search for consistency impair decision performance? Organizational Behavior and Human Decision Processes 2006. DOI: 10.1016/j.obhdp.2005.09.003.
28
Winkler RL, Clemen RT. Multiple Experts vs. Multiple Methods: Combining Correlation Assessments. Decision Analysis 2004. DOI: 10.1287/deca.1030.0008.
29
Abstract
Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This contrasts with choices between 2 possible answers to a question, which showed much less overconfidence. The authors show that overconfidence in interval estimates can result from variability in setting interval widths. However, the main cause is that subjective intervals are systematically too narrow given the accuracy of one's information-sometimes only 40% as large as necessary to be well calibrated. The degree of overconfidence varies greatly depending on how intervals are elicited. There are also substantial differences among domains and between male and female judges. The authors discuss the possible psychological mechanisms underlying this pattern of findings.
30
Harries C, Yaniv I, Harvey N. Combining advice: the weight of a dissenting opinion in the consensus. Journal of Behavioral Decision Making 2004. DOI: 10.1002/bdm.474.
31
Budescu DV, Rantilla AK, Yu HT, Karelitz TM. The effects of asymmetry among advisors on the aggregation of their opinions. Organizational Behavior and Human Decision Processes 2003. DOI: 10.1016/s0749-5978(02)00516-2.
32
Abstract
We investigate the case of a single decision maker (DM) who obtains probabilistic forecasts regarding the occurrence of a unique target event from J distinct, symmetric, and equally diagnostic expert advisors (judges). The paper begins with a mathematical model of DM's aggregation process of expert opinions, in which confidence in the final aggregate is shown to be inversely related to its perceived variance. As such, confidence is expected to vary as a function of factors such as the number of experts, the total number of cues, the fraction of cues available to each expert, the level of inter-expert overlap in information, and the range of experts' opinions. In the second part of the paper, we present results from two experiments that support the main (ordinal) predictions of the model.
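One way to make the model's ordinal predictions concrete, under strong simplifying assumptions of my own (independent, equally diagnostic cues; each expert reporting the average of the cues they happen to see; cue overlap approximated as a correlation of k/c): this is a back-of-envelope sketch, not the paper's actual model.

```python
def aggregate_variance(j, k, c, sigma2=1.0):
    """Variance of the average of j expert forecasts; each expert averages k cues
    drawn from a common pool of c independent, equally diagnostic cues."""
    var_one = sigma2 / k   # averaging k cues reduces an expert's own variance
    rho = k / c            # expected cue overlap induces this correlation (assumption)
    return var_one * (1 + (j - 1) * rho) / j

def confidence(j, k, c):
    """The model ties confidence to the inverse of the aggregate's perceived variance."""
    return 1 / aggregate_variance(j, k, c)

# More experts -> lower aggregate variance -> higher confidence:
print(confidence(6, 4, 12) > confidence(3, 4, 12))
# With full overlap (k == c), extra experts add nothing:
print(abs(aggregate_variance(3, 12, 12) - aggregate_variance(1, 12, 12)) < 1e-12)
```

The sketch reproduces the model's ordinal pattern: confidence rises with the number of experts and the number of cues, but the benefit of adding experts shrinks as inter-expert information overlap grows.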
Affiliation(s)
- D V Budescu
- Department of Psychology, University of Illinois at Urbana-Champaign (UIUC) 61820, USA.